process of evolution

  • 1 process

    I 1. noun
    1) (of time or history) Lauf, der

    he learnt a lot in the process — er lernte eine Menge dabei

    be in process — in Gang sein

    2) (proceeding) Vorgang, der; Prozedur, die
    3) (method) Verfahren, das; see also elimination 1)
    4) (natural operation) Prozess, der; Vorgang, der

    process of evolution — Evolutionsprozess, der

    2. transitive verb
    verarbeiten [Rohstoff, Signal, Daten]; bearbeiten [Antrag, Akte, Darlehen]; (for conservation) behandeln [Leder, Lebensmittel]; (Photog.) entwickeln [Film]
    II
    [prə'ses] intransitive verb ziehen
    * * *
    ['prəuses] (American ['pro-]) 1. noun
    1) (a method or way of manufacturing things: We are using a new process to make glass.) das Verfahren
    2) (a series of events that produce change or development: The process of growing up can be difficult for a child; the digestive processes.) der Prozeß
    3) (a course of action undertaken: Carrying him down the mountain was a slow process.) der Vorgang
    2. verb
    (to deal with (something) by the appropriate process: Have your photographs been processed?; The information is being processed by computer.) bearbeiten
    - processed
    - in the process of
    * * *
    pro·cess1
    [ˈprəʊses, AM ˈprɑ:-]
    I. n
    <pl -es>
    1. (set of actions) Prozess m
    \process of ageing Alterungsprozess m
    by a \process of elimination durch Auslese
    by a \process of trial and error durch [stetes] Ausprobieren, auf dem Weg der Empirie geh
    digestive \process Verdauungsvorgang m
    2. (method) Verfahren nt
    a new \process for treating breast cancer eine neue Methode zur Behandlung von Brustkrebs
    to develop a new \process ein neues Verfahren entwickeln
    3. no pl (going on) Verlauf m
    in \process im Gange
    in the \process dabei
    to be in the \process of doing sth dabei sein, etw zu tun
    4. ANAT Fortsatz m
    5. (summons) gerichtliche Verfügung
    to serve sb a \process [or a \process on sb] jdn vorladen
    II. vt
    1. (deal with)
    to \process sth etw bearbeiten
    to \process an application/a document/the mail einen Antrag/ein Dokument/die Post bearbeiten
    to \process sb's papers [or paperwork] jds Papiere durcharbeiten
    to \process sb jdn abfertigen
    to \process data/information Daten/Informationen verarbeiten [o aufbereiten]
    to \process sth etw verstehen [o [geistig] verarbeiten]
    4. (treat)
    to \process sth etw bearbeiten [o behandeln]
    to \process beans for freezing/canning Bohnen zum Einfrieren/Einmachen verarbeiten
    to \process food Nahrungsmittel haltbar machen [o konservieren]
    to \process raw materials Rohstoffe [weiter]verarbeiten
    to \process milk Milch sterilisieren
    5. PHOT
    to \process a film einen Film entwickeln
    pro·cess2
    [prə(ʊ)ˈses, AM prəˈ-]
    vi ( form) [in einer Prozession] mitgehen
    * * *
    I ['prəʊses]
    1. n
    1) Prozess m

    the process of time will... —

    in the process of time — im Laufe der Zeit, mit der Zeit

    to be in the process of doing sth — dabei sein, etw zu tun

    2) (= specific method, technique) Verfahren nt; (IND) Prozess m, Verfahren nt
    3) (JUR) Prozess m, Verfahren nt

    a process of a bone/of the jaw — ein Knochen-/Kiefervorsprung m

    2. vt
    (= treat) raw materials, data, information, waste verarbeiten; food konservieren; milk sterilisieren; application, loan, wood bearbeiten; film entwickeln; (= deal with) applicants, people abfertigen II [prə'ses]
    vi
    (Brit: go in procession) ziehen, schreiten
    * * *
    process1 [ˈprəʊses; US auch ˈprɑ-]
    A s
    1. auch TECH Verfahren n, Prozess m:
    a) Herstellungsverfahren,
    b) Herstellungsprozess, -vorgang m, Werdegang m;
    in process of construction im Bau (befindlich);
    be in the process of doing sth dabei sein, etwas zu tun;
    process annealing METALL Zwischenglühung f;
    process average mittlere Fertigungsgüte;
    process automation Prozessautomatisierung f;
    process chart WIRTSCH Arbeitsablaufdiagramm n;
    process control IT Prozesssteuerung f;
    process engineering Verfahrenstechnik f;
    process steam TECH Betriebsdampf m;
    process water TECH Betriebswasser n
    2. Vorgang m, Verlauf m, Prozess m ( auch PHYS):
    process of combustion Verbrennungsvorgang;
    processes of life Lebensvorgänge;
    mental process, process of thinking Denkprozess
    3. Arbeitsgang m
    4. Fortgang m, -schreiten n, (Ver)Lauf m (der Zeit):
    in process of time im Laufe der Zeit;
    be in process im Gange sein, sich abwickeln;
    in process of im Verlauf von (od gen);
    the machine was damaged in the process dabei wurde die Maschine beschädigt
    5. CHEM
    a) A 1, A 2:
    process cheese bes US Schmelzkäse m
    b) Reaktionsfolge f
    6. TYPO fotomechanisches Reproduktionsverfahren:
    process printing Drei- oder Vierfarbendruck m
    7. FOTO Übereinanderkopieren n
    8. JUR
    a) Zustellung(en) f(pl), besonders Vorladung f
    b) Rechtsgang m, (Gerichts)Verfahren n:
    due process of law ordentliches Verfahren, rechtliches Gehör
    9. ANAT Fortsatz m
    10. BOT Auswuchs m
    11. fig Vorsprung m
    12. MATH Auflösungsverfahren n (einer Aufgabe)
    B v/t
    1. bearbeiten, behandeln, einem Verfahren unterwerfen
    2. verarbeiten, Lebensmittel haltbar machen, Milch etc sterilisieren, (chemisch) behandeln, Stoff imprägnieren, Rohstoffe etc aufbereiten:
    process into verarbeiten zu;
    process information Daten verarbeiten;
    processed cheese Schmelzkäse m
    3. JUR
    a) vorladen
    b) gerichtlich belangen
    4. FOTO (fotomechanisch) reproduzieren oder vervielfältigen
    5. fig jemandes Fall etc bearbeiten
    process2 [prəˈses] v/i besonders Br
    1. in einer Prozession (mit)gehen
    2. ziehen
    proc. abk
    * * *
    I 1. noun
    1) (of time or history) Lauf, der
    2) (proceeding) Vorgang, der; Prozedur, die
    3) (method) Verfahren, das; see also elimination 1)
    4) (natural operation) Prozess, der; Vorgang, der

    process of evolution — Evolutionsprozess, der

    2. transitive verb
    verarbeiten [Rohstoff, Signal, Daten]; bearbeiten [Antrag, Akte, Darlehen]; (for conservation) behandeln [Leder, Lebensmittel]; (Photog.) entwickeln [Film]
    II
    [prə'ses] intransitive verb ziehen
    * * *
    n.
    (§ pl.: processes)
    = Arbeitsgang m.
    Prozess -e m.
    Vorgang -¨e m.
    v.
    entwickeln v.
    verarbeiten v.
    weiter verarbeiten ausdr.

    English-German dictionary > process

  • 2 evolution

    1. n развитие; процесс изменения, роста
    2. n эволюция, постепенное развитие

    Theory of Evolution — теория эволюции, дарвинизм

    3. n развитие, развёртывание
    4. n pl
    5. n изгибы, завитки

    the evolutions of an arabesque pattern — причудливые изгибы, арабески

    6. n фигуры
    7. n воен. мор. перестроение; манёвр, передвижение
    8. n мат. извлечение корня
    9. n спец. выделение
    10. n спец. образование
    Синонимический ряд:
    1. development (noun) change; development; evolvement; flowering; growth; growth and change; maturation; natural adaptation; natural process; progression; unfolding; upgrowth
    2. progress (noun) elaboration; expansion; gain; outgrowth; progress; rise

    English-Russian base dictionary > evolution

  • 3 evolution

    English-Russian electronics dictionary > evolution

  • 4 evolution

    The New English-Russian Dictionary of Radio-electronics > evolution

  • 5 process of change

    [D] [v.] évolution

    English-French dictionary of law, politics, economics & finance > process of change

  • 6 process evolution

    English-Russian electronics dictionary > process evolution

  • 7 process evolution

    The New English-Russian Dictionary of Radio-electronics > process evolution

  • 8 generic planning process

    1. процесс общего планирования

     

    процесс общего планирования
    Высокоуровневый процесс развития планирования Игр в период деятельности ОКОИ с момента основания и до роспуска. Этапы планирования с указанием точных сроков каждого определяются в соответствии со спецификой конкретного ОКОИ.
    [Департамент лингвистических услуг Оргкомитета «Сочи 2014». Глоссарий терминов]

    EN

    generic planning process
    High-level process that describes the evolution in the Games planning during the lifecycle of the OCOG from foundation through to dissolution. The exact planning phases and timing of each phase are adapted to fit the context of the specific OCOG.
    [Департамент лингвистических услуг Оргкомитета «Сочи 2014». Глоссарий терминов]

    Тематики

    EN

    Англо-русский словарь нормативно-технической терминологии > generic planning process

  • 9 historical evolution

    1. эволюция в процессе истории

     

    эволюция в процессе истории

    [http://www.eionet.europa.eu/gemet/alphabetic?langcode=en]

    EN

    historical evolution
    The process by which small but cumulative changes in the learned, nonrandom, systematic behavior and knowledge of a people occur from generation to generation. (Source: ANT)
    [http://www.eionet.europa.eu/gemet/alphabetic?langcode=en]

    Тематики

    EN

    DE

    FR

    Англо-русский словарь нормативно-технической терминологии > historical evolution

  • 10 pollutant evolution

    1. эволюция загрязнения

     

    эволюция загрязнения

    [http://www.eionet.europa.eu/gemet/alphabetic?langcode=en]

    EN

    pollutant evolution
    The process of cumulative reactive change following the introduction of a pollutant into the environment. (Source: ALL)
    [http://www.eionet.europa.eu/gemet/alphabetic?langcode=en]

    Тематики

    EN

    DE

    FR

    Англо-русский словарь нормативно-технической терминологии > pollutant evolution

  • 11 the process of crack evolution

    Универсальный англо-русский словарь > the process of crack evolution

  • 12 (an) inevitable step in the evolution process

    English-Russian combinatory dictionary > (an) inevitable step in the evolution process

  • 13 Artificial Intelligence

       In my opinion, none of [these programs] does even remote justice to the complexity of human mental processes. Unlike men, "artificially intelligent" programs tend to be single minded, undistractable, and unemotional. (Neisser, 1967, p. 9)
       Future progress in [artificial intelligence] will depend on the development of both practical and theoretical knowledge.... As regards theoretical knowledge, some have sought a unified theory of artificial intelligence. My view is that artificial intelligence is (or soon will be) an engineering discipline since its primary goal is to build things. (Nilsson, 1971, pp. vii-viii)
       Most workers in AI [artificial intelligence] research and in related fields confess to a pronounced feeling of disappointment in what has been achieved in the last 25 years. Workers entered the field around 1950, and even around 1960, with high hopes that are very far from being realized in 1972. In no part of the field have the discoveries made so far produced the major impact that was then promised.... In the meantime, claims and predictions regarding the potential results of AI research had been publicized which went even farther than the expectations of the majority of workers in the field, whose embarrassments have been added to by the lamentable failure of such inflated predictions....
       When able and respected scientists write in letters to the present author that AI, the major goal of computing science, represents "another step in the general process of evolution"; that possibilities in the 1980s include an all-purpose intelligence on a human-scale knowledge base; that awe-inspiring possibilities suggest themselves based on machine intelligence exceeding human intelligence by the year 2000 [one has the right to be skeptical]. (Lighthill, 1972, p. 17)
       4) Just as Astronomy Succeeded Astrology, the Discovery of Intellectual Processes in Machines Should Lead to a Science, Eventually
       Just as astronomy succeeded astrology, following Kepler's discovery of planetary regularities, the discoveries of these many principles in empirical explorations on intellectual processes in machines should lead to a science, eventually. (Minsky & Papert, 1973, p. 11)
       Many problems arise in experiments on machine intelligence because things obvious to any person are not represented in any program. One can pull with a string, but one cannot push with one.... Simple facts like these caused serious problems when Charniak attempted to extend Bobrow's "Student" program to more realistic applications, and they have not been faced up to until now. (Minsky & Papert, 1973, p. 77)
       What do we mean by [a symbolic] "description"? We do not mean to suggest that our descriptions must be made of strings of ordinary language words (although they might be). The simplest kind of description is a structure in which some features of a situation are represented by single ("primitive") symbols, and relations between those features are represented by other symbols-or by other features of the way the description is put together. (Minsky & Papert, 1973, p. 11)
       [AI is] the use of computer programs and programming techniques to cast light on the principles of intelligence in general and human thought in particular. (Boden, 1977, p. 5)
       The word you look for and hardly ever see in the early AI literature is the word knowledge. They didn't believe you have to know anything, you could always rework it all.... In fact 1967 is the turning point in my mind when there was enough feeling that the old ideas of general principles had to go.... I came up with an argument for what I called the primacy of expertise, and at the time I called the other guys the generalists. (Moses, quoted in McCorduck, 1979, pp. 228-229)
       9) Artificial Intelligence Is Psychology in a Particularly Pure and Abstract Form
       The basic idea of cognitive science is that intelligent beings are semantic engines-in other words, automatic formal systems with interpretations under which they consistently make sense. We can now see why this includes psychology and artificial intelligence on a more or less equal footing: people and intelligent computers (if and when there are any) turn out to be merely different manifestations of the same underlying phenomenon. Moreover, with universal hardware, any semantic engine can in principle be formally imitated by a computer if only the right program can be found. And that will guarantee semantic imitation as well, since (given the appropriate formal behavior) the semantics is "taking care of itself" anyway. Thus we also see why, from this perspective, artificial intelligence can be regarded as psychology in a particularly pure and abstract form. The same fundamental structures are under investigation, but in AI, all the relevant parameters are under direct experimental control (in the programming), without any messy physiology or ethics to get in the way. (Haugeland, 1981b, p. 31)
       There are many different kinds of reasoning one might imagine:
        Formal reasoning involves the syntactic manipulation of data structures to deduce new ones following prespecified rules of inference. Mathematical logic is the archetypical formal representation. Procedural reasoning uses simulation to answer questions and solve problems. When we use a program to answer What is the sum of 3 and 4? it uses, or "runs," a procedural model of arithmetic. Reasoning by analogy seems to be a very natural mode of thought for humans but, so far, difficult to accomplish in AI programs. The idea is that when you ask the question Can robins fly? the system might reason that "robins are like sparrows, and I know that sparrows can fly, so robins probably can fly."
        Generalization and abstraction are also natural reasoning processes for humans that are difficult to pin down well enough to implement in a program. If one knows that Robins have wings, that Sparrows have wings, and that Blue jays have wings, eventually one will believe that All birds have wings. This capability may be at the core of most human learning, but it has not yet become a useful technique in AI.... Meta-level reasoning is demonstrated by the way one answers the question What is Paul Newman's telephone number? You might reason that "if I knew Paul Newman's number, I would know that I knew it, because it is a notable fact." This involves using "knowledge about what you know," in particular, about the extent of your knowledge and about the importance of certain facts. Recent research in psychology and AI indicates that meta-level reasoning may play a central role in human cognitive processing. (Barr & Feigenbaum, 1981, pp. 146-147)
       Suffice it to say that programs already exist that can do things-or, at the very least, appear to be beginning to do things-which ill-informed critics have asserted a priori to be impossible. Examples include: perceiving in a holistic as opposed to an atomistic way; using language creatively; translating sensibly from one language to another by way of a language-neutral semantic representation; planning acts in a broad and sketchy fashion, the details being decided only in execution; distinguishing between different species of emotional reaction according to the psychological context of the subject. (Boden, 1981, p. 33)
       Can the synthesis of Man and Machine ever be stable, or will the purely organic component become such a hindrance that it has to be discarded? If this eventually happens-and I have... good reasons for thinking that it must-we have nothing to regret and certainly nothing to fear. (Clarke, 1984, p. 243)
       The thesis of GOFAI... is not that the processes underlying intelligence can be described symbolically... but that they are symbolic. (Haugeland, 1985, p. 113)
        14) Artificial Intelligence Provides a Useful Approach to Psychological and Psychiatric Theory Formation
       It is all very well formulating psychological and psychiatric theories verbally but, when using natural language (even technical jargon), it is difficult to recognise when a theory is complete; oversights are all too easily made, gaps too readily left. This is a point which is generally recognised to be true and it is for precisely this reason that the behavioural sciences attempt to follow the natural sciences in using "classical" mathematics as a more rigorous descriptive language. However, it is an unfortunate fact that, with a few notable exceptions, there has been a marked lack of success in this application. It is my belief that a different approach-a different mathematics-is needed, and that AI provides just this approach. (Hand, quoted in Hand, 1985, pp. 6-7)
       We might distinguish among four kinds of AI.
       Research of this kind involves building and programming computers to perform tasks which, to paraphrase Marvin Minsky, would require intelligence if they were done by us. Researchers in nonpsychological AI make no claims whatsoever about the psychological realism of their programs or the devices they build, that is, about whether or not computers perform tasks as humans do.
       Research here is guided by the view that the computer is a useful tool in the study of mind. In particular, we can write computer programs or build devices that simulate alleged psychological processes in humans and then test our predictions about how the alleged processes work. We can weave these programs and devices together with other programs and devices that simulate different alleged mental processes and thereby test the degree to which the AI system as a whole simulates human mentality. According to weak psychological AI, working with computer models is a way of refining and testing hypotheses about processes that are allegedly realized in human minds.
    ... According to this view, our minds are computers and therefore can be duplicated by other computers. Sherry Turkle writes that the "real ambition is of mythic proportions, making a general purpose intelligence, a mind." (Turkle, 1984, p. 240) The authors of a major text announce that "the ultimate goal of AI research is to build a person or, more humbly, an animal." (Charniak & McDermott, 1985, p. 7)
       Research in this field, like strong psychological AI, takes seriously the functionalist view that mentality can be realized in many different types of physical devices. Suprapsychological AI, however, accuses strong psychological AI of being chauvinistic — of being only interested in human intelligence! Suprapsychological AI claims to be interested in all the conceivable ways intelligence can be realized. (Flanagan, 1991, pp. 241-242)
        16) Determination of Relevance of Rules in Particular Contexts
       Even if the [rules] were stored in a context-free form the computer still couldn't use them. To do that the computer requires rules enabling it to draw on just those [rules] which are relevant in each particular context. Determination of relevance will have to be based on further facts and rules, but the question will again arise as to which facts and rules are relevant for making each particular determination. One could always invoke further facts and rules to answer this question, but of course these must be only the relevant ones. And so it goes. It seems that AI workers will never be able to get started here unless they can settle the problem of relevance beforehand by cataloguing types of context and listing just those facts which are relevant in each. (Dreyfus & Dreyfus, 1986, p. 80)
       Perhaps the single most important idea to artificial intelligence is that there is no fundamental difference between form and content, that meaning can be captured in a set of symbols such as a semantic net. (G. Johnson, 1986, p. 250)
        18) The Assumption That the Mind Is a Formal System
       Artificial intelligence is based on the assumption that the mind can be described as some kind of formal system manipulating symbols that stand for things in the world. Thus it doesn't matter what the brain is made of, or what it uses for tokens in the great game of thinking. Using an equivalent set of tokens and rules, we can do thinking with a digital computer, just as we can play chess using cups, salt and pepper shakers, knives, forks, and spoons. Using the right software, one system (the mind) can be mapped into the other (the computer). (G. Johnson, 1986, p. 250)
        19) A Statement of the Primary and Secondary Purposes of Artificial Intelligence
       The primary goal of Artificial Intelligence is to make machines smarter.
       The secondary goals of Artificial Intelligence are to understand what intelligence is (the Nobel laureate purpose) and to make machines more useful (the entrepreneurial purpose). (Winston, 1987, p. 1)
       The theoretical ideas of older branches of engineering are captured in the language of mathematics. We contend that mathematical logic provides the basis for theory in AI. Although many computer scientists already count logic as fundamental to computer science in general, we put forward an even stronger form of the logic-is-important argument....
       AI deals mainly with the problem of representing and using declarative (as opposed to procedural) knowledge. Declarative knowledge is the kind that is expressed as sentences, and AI needs a language in which to state these sentences. Because the languages in which this knowledge usually is originally captured (natural languages such as English) are not suitable for computer representations, some other language with the appropriate properties must be used. It turns out, we think, that the appropriate properties include at least those that have been uppermost in the minds of logicians in their development of logical languages such as the predicate calculus. Thus, we think that any language for expressing knowledge in AI systems must be at least as expressive as the first-order predicate calculus. (Genesereth & Nilsson, 1987, p. viii)
        21) Perceptual Structures Can Be Represented as Lists of Elementary Propositions
       In artificial intelligence studies, perceptual structures are represented as assemblages of description lists, the elementary components of which are propositions asserting that certain relations hold among elements. (Chase & Simon, 1988, p. 490)
       Artificial intelligence (AI) is sometimes defined as the study of how to build and/or program computers to enable them to do the sorts of things that minds can do. Some of these things are commonly regarded as requiring intelligence: offering a medical diagnosis and/or prescription, giving legal or scientific advice, proving theorems in logic or mathematics. Others are not, because they can be done by all normal adults irrespective of educational background (and sometimes by non-human animals too), and typically involve no conscious control: seeing things in sunlight and shadows, finding a path through cluttered terrain, fitting pegs into holes, speaking one's own native tongue, and using one's common sense. Because it covers AI research dealing with both these classes of mental capacity, this definition is preferable to one describing AI as making computers do "things that would require intelligence if done by people." However, it presupposes that computers could do what minds can do, that they might really diagnose, advise, infer, and understand. One could avoid this problematic assumption (and also side-step questions about whether computers do things in the same way as we do) by defining AI instead as "the development of computers whose observable performance has features which in humans we would attribute to mental processes." This bland characterization would be acceptable to some AI workers, especially amongst those focusing on the production of technological tools for commercial purposes. But many others would favour a more controversial definition, seeing AI as the science of intelligence in general-or, more accurately, as the intellectual core of cognitive science. As such, its goal is to provide a systematic theory that can explain (and perhaps enable us to replicate) both the general categories of intentionality and the diverse psychological capacities grounded in them. (Boden, 1990b, pp. 1-2)
       Because the ability to store data somewhat corresponds to what we call memory in human beings, and because the ability to follow logical procedures somewhat corresponds to what we call reasoning in human beings, many members of the cult have concluded that what computers do somewhat corresponds to what we call thinking. It is no great difficulty to persuade the general public of that conclusion since computers process data very fast in small spaces well below the level of visibility; they do not look like other machines when they are at work. They seem to be running along as smoothly and silently as the brain does when it remembers and reasons and thinks. On the other hand, those who design and build computers know exactly how the machines are working down in the hidden depths of their semiconductors. Computers can be taken apart, scrutinized, and put back together. Their activities can be tracked, analyzed, measured, and thus clearly understood-which is far from possible with the brain. This gives rise to the tempting assumption on the part of the builders and designers that computers can tell us something about brains, indeed, that the computer can serve as a model of the mind, which then comes to be seen as some manner of information processing machine, and possibly not as good at the job as the machine. (Roszak, 1994, pp. xiv-xv)
       The inner workings of the human mind are far more intricate than the most complicated systems of modern technology. Researchers in the field of artificial intelligence have been attempting to develop programs that will enable computers to display intelligent behavior. Although this field has been an active one for more than thirty-five years and has had many notable successes, AI researchers still do not know how to create a program that matches human intelligence. No existing program can recall facts, solve problems, reason, learn, and process language with human facility. This lack of success has occurred not because computers are inferior to human brains but rather because we do not yet know in sufficient detail how intelligence is organized in the brain. (Anderson, 1995, p. 2)

    Historical dictionary of quotations in cognitive science > Artificial Intelligence

  • 14 PIE

    1) Компьютерная техника: Process Instance Evolution
    2) Медицина: Pharynx And Intestinal Excess, Physicians Insurance Exchange, pulmonary infiltrates with eosinophilia (инфильтраты легких, сопровождающиеся эозинофилией)
    7) Сельское хозяйство: Pulsed Irrigation Evacuation
    8) Шутливое выражение: Paleontologists In Exile
    9) Математика: Principle Of Inclusion And Exclusion
    11) Лингвистика: Proto-Indo-European language
    12) Биржевой термин: Plan Invest Enjoy
    13) Металлургия: по индивидуальному запросу (Per individual enquiry; аббревиатура часто используется в американском английском при обсуждении поставок и цен)
    17) Физиология: Pressure Ice And Elevation
    18) Электроника: Personal Interactive Electronics
    21) Транспорт: Pan Island Expressway
    22) Фирменный знак: Penang Indian Entrepreneurs
    26) Безопасность: Propagation Infection And Execution
    27) Фантастика: Paranormal Interpol Enterprise

    Универсальный англо-русский словарь > PIE

  • 15 pie

    1) Компьютерная техника: Process Instance Evolution
    2) Медицина: Pharynx And Intestinal Excess, Physicians Insurance Exchange, pulmonary infiltrates with eosinophilia (инфильтраты легких, сопровождающиеся эозинофилией)
    7) Сельское хозяйство: Pulsed Irrigation Evacuation
    8) Шутливое выражение: Paleontologists In Exile
    9) Математика: Principle Of Inclusion And Exclusion
    11) Лингвистика: Proto-Indo-European language
    12) Биржевой термин: Plan Invest Enjoy
    13) Металлургия: по индивидуальному запросу (Per individual enquiry; аббревиатура часто используется в американском английском при обсуждении поставок и цен)
    17) Физиология: Pressure Ice And Elevation
    18) Электроника: Personal Interactive Electronics
    21) Транспорт: Pan Island Expressway
    22) Фирменный знак: Penang Indian Entrepreneurs
    26) Безопасность: Propagation Infection And Execution
    27) Фантастика: Paranormal Interpol Enterprise

    Универсальный англо-русский словарь > pie

  • 16 anagenesis

    [ˌænə'ʤenəsɪs]
    сущ.; мн. anageneses; биол.
    1) анагенез (вид эволюции, в процессе которой органы приобретают новую специализацию, а также могут появляться новые органы)

    The process of evolution may be either progressive (Anagenesis) or retrogressive (Catagenesis). — Процесс эволюции может быть либо прогрессирующим (анагенез), либо регрессирующим (катагенез).

    Ant:

    Англо-русский современный словарь > anagenesis

  • 17 Bibliography

     ■ Aitchison, J. (1987). Noam Chomsky: Consensus and controversy. New York: Falmer Press.
     ■ Anderson, J. R. (1980). Cognitive psychology and its implications. San Francisco: W. H. Freeman.
     ■ Anderson, J. R. (1983). The architecture of cognition. Cambridge, MA: Harvard University Press.
     ■ Anderson, J. R. (1995). Cognitive psychology and its implications (4th ed.). New York: W. H. Freeman.
     ■ Archilochus (1971). In M. L. West (Ed.), Iambi et elegi graeci (Vol. 1). Oxford: Oxford University Press.
     ■ Armstrong, D. M. (1990). The causal theory of the mind. In W. G. Lycan (Ed.), Mind and cognition: A reader (pp. 37-47). Cambridge, MA: Basil Blackwell. (Originally published in 1981 in The nature of mind and other essays, Ithaca, NY: University Press).
     ■ Atkins, P. W. (1992). Creation revisited. Oxford: W. H. Freeman & Company.
     ■ Austin, J. L. (1962). How to do things with words. Cambridge, MA: Harvard University Press.
     ■ Bacon, F. (1878). Of the proficience and advancement of learning divine and human. In The works of Francis Bacon (Vol. 1). Cambridge, MA: Hurd & Houghton.
     ■ Bacon, R. (1928). Opus majus (Vol. 2). R. B. Burke (Trans.). Philadelphia, PA: University of Pennsylvania Press.
     ■ Bar-Hillel, Y. (1960). The present status of automatic translation of languages. In F. L. Alt (Ed.), Advances in computers (Vol. 1). New York: Academic Press.
     ■ Barr, A., & E. A. Feigenbaum (Eds.) (1981). The handbook of artificial intelligence (Vol. 1). Reading, MA: Addison-Wesley.
     ■ Barr, A., & E. A. Feigenbaum (Eds.) (1982). The handbook of artificial intelligence (Vol. 2). Los Altos, CA: William Kaufman.
     ■ Barron, F. X. (1963). The needs for order and for disorder as motives in creative activity. In C. W. Taylor & F. X. Barron (Eds.), Scientific creativity: Its recognition and development (pp. 153-160). New York: Wiley.
     ■ Bartlett, F. C. (1932). Remembering: A study in experimental and social psychology. Cambridge: Cambridge University Press.
     ■ Bartley, S. H. (1969). Principles of perception. London: Harper & Row.
     ■ Barzun, J. (1959). The house of intellect. New York: Harper & Row.
     ■ Beach, F. A., D. O. Hebb, C. T. Morgan & H. W. Nissen (Eds.) (1960). The neuropsychology of Lashley. New York: McGraw-Hill.
     ■ Berkeley, G. (1996). Principles of human knowledge: Three Dialogues. Oxford: Oxford University Press. (Originally published in 1710.)
     ■ Berlin, I. (1953). The hedgehog and the fox: An essay on Tolstoy's view of history. NY: Simon & Schuster.
     ■ Bierwisch, J. (1970). Semantics. In J. Lyons (Ed.), New horizons in linguistics. Baltimore: Penguin Books.
     ■ Black, H. C. (1951). Black's law dictionary. St. Paul, MN: West Publishing.
     ■ Bobrow, D. G., & D. A. Norman (1975). Some principles of memory schemata. In D. G. Bobrow & A. Collins (Eds.), Representation and understanding: Studies in Cognitive Science (pp. 131-149). New York: Academic Press.
     ■ Boden, M. A. (1977). Artificial intelligence and natural man. New York: Basic Books.
     ■ Boden, M. A. (1981). Minds and mechanisms. Ithaca, NY: Cornell University Press.
     ■ Boden, M. A. (1990a). The creative mind: Myths and mechanisms. London: Cardinal.
     ■ Boden, M. A. (1990b). The philosophy of artificial intelligence. Oxford: Oxford University Press.
     ■ Boden, M. A. (1994). Precis of The creative mind: Myths and mechanisms. Behavioral and brain sciences 17, 519-570.
     ■ Boden, M. (1996). Creativity. In M. Boden (Ed.), Artificial Intelligence (2nd ed.). San Diego: Academic Press.
     ■ Bolter, J. D. (1984). Turing's man: Western culture in the computer age. Chapel Hill, NC: University of North Carolina Press.
     ■ Bolton, N. (1972). The psychology of thinking. London: Methuen.
     ■ Bourne, L. E. (1973). Some forms of cognition: A critical analysis of several papers. In R. Solso (Ed.), Contemporary issues in cognitive psychology (pp. 313-324). Loyola Symposium on Cognitive Psychology (Chicago 1972). Washington, DC: Winston.
     ■ Bransford, J. D., N. S. McCarrell, J. J. Franks & K. E. Nitsch (1977). Toward unexplaining memory. In R. Shaw & J. D. Bransford (Eds.), Perceiving, acting, and knowing (pp. 431-466). Hillsdale, NJ: Lawrence Erlbaum Associates.
     ■ Breger, L. (1981). Freud's unfinished journey. London: Routledge & Kegan Paul.
     ■ Brehmer, B. (1986). In one word: Not from experience. In H. R. Arkes & K. Hammond (Eds.), Judgment and decision making: An interdisciplinary reader (pp. 705-719). Cambridge: Cambridge University Press.
     ■ Bresnan, J. (1978). A realistic transformational grammar. In M. Halle, J. Bresnan & G. A. Miller (Eds.), Linguistic theory and psychological reality (pp. 1-59). Cambridge, MA: MIT Press.
     ■ Brislin, R. W., W. J. Lonner & R. M. Thorndike (Eds.) (1973). Cross-cultural research methods. New York: Wiley.
     ■ Bronowski, J. (1977). A sense of the future: Essays in natural philosophy. P. E. Ariotti with R. Bronowski (Eds.). Cambridge, MA: MIT Press.
     ■ Bronowski, J. (1978). The origins of knowledge and imagination. New Haven, CT: Yale University Press.
     ■ Brown, R. O. (1973). A first language: The early stages. Cambridge, MA: Harvard University Press.
     ■ Brown, T. (1970). Lectures on the philosophy of the human mind. In R. Brown (Ed.), Between Hume and Mill: An anthology of British philosophy, 1749-1843 (pp. 330-387). New York: Random House/Modern Library.
     ■ Bruner, J. S., J. Goodnow & G. Austin (1956). A study of thinking. New York: Wiley.
     ■ Campbell, J. (1982). Grammatical man: Information, entropy, language, and life. New York: Simon & Schuster.
     ■ Campbell, J. (1989). The improbable machine. New York: Simon & Schuster.
     ■ Carlyle, T. (1966). On heroes, hero-worship and the heroic in history. Lincoln: University of Nebraska Press. (Originally published in 1841.)
     ■ Carnap, R. (1959). The elimination of metaphysics through logical analysis of language [Ueberwindung der Metaphysik durch logische Analyse der Sprache]. In A. J. Ayer (Ed.), Logical positivism (pp. 60-81) A. Pap (Trans). New York: Free Press. (Originally published in 1932.)
     ■ Cassirer, E. (1946). Language and myth. New York: Harper and Brothers. Reprinted. New York: Dover Publications, 1953.
     ■ Cattell, R. B., & H. J. Butcher (1970). Creativity and personality. In P. E. Vernon (Ed.), Creativity. Harmondsworth, England: Penguin Books.
     ■ Caudill, M., & C. Butler (1990). Naturally intelligent systems. Cambridge, MA: MIT Press/Bradford Books.
     ■ Chandrasekaran, B. (1990). What kind of information processing is intelligence? A perspective on AI paradigms and a proposal. In D. Partridge & R. Wilks (Eds.), The foundations of artificial intelligence: A sourcebook (pp. 14-46). Cambridge: Cambridge University Press.
     ■ Charniak, E., & McDermott, D. (1985). Introduction to artificial intelligence. Reading, MA: Addison-Wesley.
     ■ Chase, W. G., & H. A. Simon (1988). The mind's eye in chess. In A. Collins & E. E. Smith (Eds.), Readings in cognitive science: A perspective from psychology and artificial intelligence (pp. 461-493). San Mateo, CA: Kaufmann.
     ■ Cheney, D. L., & R. M. Seyfarth (1990). How monkeys see the world: Inside the mind of another species. Chicago: University of Chicago Press.
     ■ Chi, M.T.H., R. Glaser & E. Rees (1982). Expertise in problem solving. In R. J. Sternberg (Ed.), Advances in the psychology of human intelligence (pp. 7-73). Hillsdale, NJ: Lawrence Erlbaum Associates.
     ■ Chomsky, N. (1957). Syntactic structures. The Hague: Mouton. Janua Linguarum.
     ■ Chomsky, N. (1964). A transformational approach to syntax. In J. A. Fodor & J. J. Katz (Eds.), The structure of language: Readings in the philosophy of language (pp. 211-245). Englewood Cliffs, NJ: Prentice-Hall.
     ■ Chomsky, N. (1965). Aspects of the theory of syntax. Cambridge, MA: MIT Press.
     ■ Chomsky, N. (1972). Language and mind (enlarged ed.). New York: Harcourt Brace Jovanovich.
     ■ Chomsky, N. (1979). Language and responsibility. New York: Pantheon.
     ■ Chomsky, N. (1986). Knowledge of language: Its nature, origin and use. New York: Praeger Special Studies.
     ■ Churchland, P. (1979). Scientific realism and the plasticity of mind. New York: Cambridge University Press.
     ■ Churchland, P. M. (1989). A neurocomputational perspective: The nature of mind and the structure of science. Cambridge, MA: MIT Press.
     ■ Churchland, P. S. (1986). Neurophilosophy. Cambridge, MA: MIT Press/Bradford Books.
     ■ Clark, A. (1996). Philosophical Foundations. In M. A. Boden (Ed.), Artificial intelligence (2nd ed.). San Diego: Academic Press.
     ■ Clark, H. H., & T. B. Carlson (1981). Context for comprehension. In J. Long & A. Baddeley (Eds.), Attention and performance (Vol. 9, pp. 313-330). Hillsdale, NJ: Lawrence Erlbaum Associates.
     ■ Clarke, A. C. (1984). Profiles of the future: An inquiry into the limits of the possible. New York: Holt, Rinehart & Winston.
     ■ Claxton, G. (1980). Cognitive psychology: A suitable case for what sort of treatment? In G. Claxton (Ed.), Cognitive psychology: New directions (pp. 1-25). London: Routledge & Kegan Paul.
     ■ Code, M. (1985). Order and organism. Albany, NY: State University of New York Press.
     ■ Collingwood, R. G. (1972). The idea of history. New York: Oxford University Press.
     ■ Coopersmith, S. (1967). The antecedents of self-esteem. San Francisco: W. H. Freeman.
     ■ Copland, A. (1952). Music and imagination. London: Oxford University Press.
     ■ Coren, S. (1994). The intelligence of dogs. New York: Bantam Books.
     ■ Cottingham, J. (Ed.) (1996). Western philosophy: An anthology. Oxford: Blackwell Publishers.
     ■ Cox, C. (1926). The early mental traits of three hundred geniuses. Stanford, CA: Stanford University Press.
     ■ Craik, K.J.W. (1943). The nature of explanation. Cambridge: Cambridge University Press.
     ■ Cronbach, L. J. (1990). Essentials of psychological testing (5th ed.). New York: HarperCollins.
     ■ Cronbach, L. J., & R. E. Snow (1977). Aptitudes and instructional methods. New York: Irvington. Paperback edition, 1981.
     ■ Csikszentmihalyi, M. (1993). The evolving self. New York: Harper Perennial.
     ■ Culler, J. (1976). Ferdinand de Saussure. New York: Penguin Books.
     ■ Curtius, E. R. (1973). European literature and the Latin Middle Ages. W. R. Trask (Trans.). Princeton, NJ: Princeton University Press.
     ■ D'Alembert, J.L.R. (1963). Preliminary discourse to the encyclopedia of Diderot. R. N. Schwab (Trans.). Indianapolis: Bobbs-Merrill.
     ■ Damasio, A. (1994). Descartes' error: Emotion, reason, and the human brain. New York: Avon.
     ■ Dampier, W. C. (1966). A history of modern science. Cambridge: Cambridge University Press.
     ■ Darwin, C. (1911). The life and letters of Charles Darwin (Vol. 1). Francis Darwin (Ed.). New York: Appleton.
     ■ Davidson, D. (1970) Mental events. In L. Foster & J. W. Swanson (Eds.), Experience and theory (pp. 79-101). Amherst: University of Massachussetts Press.
     ■ Davies, P. (1995). About time: Einstein's unfinished revolution. New York: Simon & Schuster/Touchstone.
     ■ Davis, R., & J. J. King (1977). An overview of production systems. In E. Elcock & D. Michie (Eds.), Machine intelligence 8. Chichester, England: Ellis Horwood.
     ■ Davis, R., & D. B. Lenat (1982). Knowledge-based systems in artificial intelligence. New York: McGraw-Hill.
     ■ Dawkins, R. (1982). The extended phenotype: The gene as the unit of selection. Oxford: W. H. Freeman.
     ■ deKleer, J., & J. S. Brown (1983). Assumptions and ambiguities in mechanistic mental models. In D. Gentner & A. L. Stevens (Eds.), Mental models (pp. 155-190). Hillsdale, NJ: Lawrence Erlbaum Associates.
     ■ Dennett, D. C. (1978a). Brainstorms: Philosophical essays on mind and psychology. Montgomery, VT: Bradford Books.
     ■ Dennett, D. C. (1978b). Toward a cognitive theory of consciousness. In D. C. Dennett, Brainstorms: Philosophical Essays on Mind and Psychology. Montgomery, VT: Bradford Books.
     ■ Dennett, D. C. (1995). Darwin's dangerous idea: Evolution and the meanings of life. New York: Simon & Schuster/Touchstone.
     ■ Descartes, R. (1897-1910). Traité de l'homme. In Oeuvres de Descartes (Vol. 11, pp. 119-215). Paris: Charles Adam & Paul Tannery. (Originally published in 1634.)
     ■ Descartes, R. (1950). Discourse on method. L. J. Lafleur (Trans.). New York: Liberal Arts Press. (Originally published in 1637.)
     ■ Descartes, R. (1951). Meditation on first philosophy. L. J. Lafleur (Trans.). New York: Liberal Arts Press. (Originally published in 1641.)
     ■ Descartes, R. (1955). The philosophical works of Descartes. E. S. Haldane and G.R.T. Ross (Trans.). New York: Dover. (Originally published in 1911 by Cambridge University Press.)
     ■ Descartes, R. (1967). Discourse on method (Pt. V). In E. S. Haldane and G.R.T. Ross (Eds.), The philosophical works of Descartes (Vol. 1, pp. 106-118). Cambridge: Cambridge University Press. (Originally published in 1637.)
     ■ Descartes, R. (1970a). Discourse on method. In E. S. Haldane & G.R.T. Ross (Eds.), The philosophical works of Descartes (Vol. 1, pp. 181-200). Cambridge: Cambridge University Press. (Originally published in 1637.)
     ■ Descartes, R. (1970b). Principles of philosophy. In E. S. Haldane & G.R.T. Ross (Eds.), The philosophical works of Descartes (Vol. 1, pp. 178-291). Cambridge: Cambridge University Press. (Originally published in 1644.)
     ■ Descartes, R. (1984). Meditations on first philosophy. In J. Cottingham, R. Stoothoff & D. Murdoch (Trans.), The philosophical works of Descartes (Vol. 2). Cambridge: Cambridge University Press. (Originally published in 1641.)
     ■ Descartes, R. (1986). Meditations on first philosophy. J. Cottingham (Trans.). Cambridge: Cambridge University Press. (Originally published in 1641 as Meditationes de prima philosophia.)
     ■ deWulf, M. (1956). An introduction to scholastic philosophy. Mineola, NY: Dover Books.
     ■ Dixon, N. F. (1981). Preconscious processing. London: Wiley.
     ■ Doyle, A. C. (1986). The Boscombe Valley mystery. In Sherlock Holmes: The complete novels and stories (Vol. 1). New York: Bantam.
     ■ Dreyfus, H., & S. Dreyfus (1986). Mind over machine. New York: Free Press.
     ■ Dreyfus, H. L. (1972). What computers can't do: The limits of artificial intelligence (revised ed.). New York: Harper & Row.
     ■ Dreyfus, H. L., & S. E. Dreyfus (1986). Mind over machine: The power of human intuition and expertise in the era of the computer. New York: Free Press.
     ■ Edelman, G. M. (1992). Bright air, brilliant fire: On the matter of the mind. New York: Basic Books.
     ■ Ehrenzweig, A. (1967). The hidden order of art. London: Weidenfeld & Nicolson.
     ■ Einstein, A., & L. Infeld (1938). The evolution of physics. New York: Simon & Schuster.
     ■ Eisenstein, S. (1947). Film sense. New York: Harcourt, Brace & World.
     ■ Everdell, W. R. (1997). The first moderns. Chicago: University of Chicago Press.
     ■ Eysenck, M. W. (1977). Human memory: Theory, research and individual difference. Oxford: Pergamon.
     ■ Eysenck, M. W. (1982). Attention and arousal: Cognition and performance. Berlin: Springer.
     ■ Eysenck, M. W. (1984). A handbook of cognitive psychology. Hillsdale, NJ: Lawrence Erlbaum Associates.
     ■ Fancher, R. E. (1979). Pioneers of psychology. New York: W. W. Norton.
     ■ Farrell, B. A. (1981). The standing of psychoanalysis. New York: Oxford University Press.
     ■ Feldman, D. H. (1980). Beyond universals in cognitive development. Norwood, NJ: Ablex.
     ■ Fetzer, J. H. (1996). Philosophy and cognitive science (2nd ed.). New York: Paragon House.
     ■ Finke, R. A. (1990). Creative imagery: Discoveries and inventions in visualization. Hillsdale, NJ: Lawrence Erlbaum Associates.
     ■ Flanagan, O. (1991). The science of the mind. Cambridge MA: MIT Press/Bradford Books.
     ■ Fodor, J. (1983). The modularity of mind. Cambridge, MA: MIT Press/Bradford Books.
     ■ Frege, G. (1972). Conceptual notation. T. W. Bynum (Trans.). Oxford: Clarendon Press. (Originally published in 1879.)
     ■ Frege, G. (1979). Logic. In H. Hermes, F. Kambartel & F. Kaulbach (Eds.), Gottlob Frege: Posthumous writings. Chicago: University of Chicago Press. (Originally published in 1879-1891.)
     ■ Freud, S. (1959). Creative writers and day-dreaming. In J. Strachey (Ed.), The standard edition of the complete psychological works of Sigmund Freud (Vol. 9, pp. 143-153). London: Hogarth Press.
     ■ Freud, S. (1966). Project for a scientific psychology. In J. Strachey (Ed.), The standard edition of the complete psychological works of Sigmund Freud (Vol. 1, pp. 295-398). London: Hogarth Press. (Originally published in 1950 as Aus den Anfängen der Psychoanalyse, in London by Imago Publishing.)
     ■ Freud, S. (1976). Lecture 18-Fixation to traumas-the unconscious. In J. Strachey (Ed.), The standard edition of the complete psychological works of Sigmund Freud (Vol. 16, p. 285). London: Hogarth Press.
     ■ Galileo, G. (1990). Il saggiatore [The assayer]. In S. Drake (Ed.), Discoveries and opinions of Galileo. New York: Anchor Books. (Originally published in 1623.)
     ■ Gassendi, P. (1970). Letter to Descartes. In "Objections and replies." In E. S. Haldane & G.R.T. Ross (Eds.), The philosophical works of Descartes (Vol. 2, pp. 179-240). Cambridge: Cambridge University Press. (Originally published in 1641.)
     ■ Gazzaniga, M. S. (1988). Mind matters: How mind and brain interact to create our conscious lives. Boston: Houghton Mifflin in association with MIT Press/Bradford Books.
     ■ Genesereth, M. R., & N. J. Nilsson (1987). Logical foundations of artificial intelligence. Palo Alto, CA: Morgan Kaufmann.
     ■ Ghiselin, B. (1952). The creative process. New York: Mentor.
     ■ Ghiselin, B. (1985). The creative process. Berkeley, CA: University of California Press. (Originally published in 1952.)
     ■ Gilhooly, K. J. (1996). Thinking: Directed, undirected and creative (3rd ed.). London: Academic Press.
     ■ Glass, A. L., K. J. Holyoak & J. L. Santa (1979). Cognition. Reading, MA: Addison-Wesley.
     ■ Goody, J. (1977). The domestication of the savage mind. Cambridge: Cambridge University Press.
     ■ Gruber, H. E. (1980). Darwin on man: A psychological study of scientific creativity (2nd ed.). Chicago: University of Chicago Press.
     ■ Gruber, H. E., & S. Davis (1988). Inching our way up Mount Olympus: The evolving systems approach to creative thinking. In R. J. Sternberg (Ed.), The nature of creativity: Contemporary psychological perspectives. Cambridge: Cambridge University Press.
     ■ Guthrie, E. R. (1972). The psychology of learning. New York: Harper. (Originally published in 1935.)
     ■ Habermas, J. (1972). Knowledge and human interests. Boston: Beacon Press.
     ■ Hadamard, J. (1945). The psychology of invention in the mathematical field. Princeton, NJ: Princeton University Press.
     ■ Hand, D. J. (1985). Artificial intelligence and psychiatry. Cambridge: Cambridge University Press.
     ■ Harris, M. (1981). The language myth. London: Duckworth.
     ■ Haugeland, J. (Ed.) (1981). Mind design: Philosophy, psychology, artificial intelligence. Cambridge, MA: MIT Press/Bradford Books.
     ■ Haugeland, J. (1981a). The nature and plausibility of cognitivism. In J. Haugeland (Ed.), Mind design: Philosophy, psychology, artificial intelligence (pp. 243-281). Cambridge, MA: MIT Press.
     ■ Haugeland, J. (1981b). Semantic engines: An introduction to mind design. In J. Haugeland (Ed.), Mind design: Philosophy, psychology, artificial intelligence (pp. 1-34). Cambridge, MA: MIT Press/Bradford Books.
     ■ Haugeland, J. (1985). Artificial intelligence: The very idea. Cambridge, MA: MIT Press.
     ■ Hawkes, T. (1977). Structuralism and semiotics. Berkeley: University of California Press.
     ■ Hebb, D. O. (1949). The organisation of behaviour. New York: Wiley.
     ■ Hebb, D. O. (1958). A textbook of psychology. Philadelphia: Saunders.
     ■ Hegel, G.W.F. (1910). The phenomenology of mind. J. B. Baillie (Trans.). London: Sonnenschein. (Originally published as Phaenomenologie des Geistes, 1807.)
     ■ Heisenberg, W. (1958). Physics and philosophy. New York: Harper & Row.
     ■ Hempel, C. G. (1966). Philosophy of natural science. Englewood Cliffs, NJ: Prentice-Hall.
     ■ Herman, A. (1997). The idea of decline in Western history. New York: Free Press.
     ■ Herrnstein, R. J., & E. G. Boring (Eds.) (1965). A source book in the history of psychology. Cambridge, MA: Harvard University Press.
     ■ Herzmann, E. (1964). Mozart's creative process. In P. H. Lang (Ed.), The creative world of Mozart (pp. 17-30). London: Oldbourne Press.
     ■ Hilgard, E. R. (1957). Introduction to psychology. London: Methuen.
     ■ Hobbes, T. (1651). Leviathan. London: Crooke.
     ■ Hofstadter, D. R. (1979). Gödel, Escher, Bach: An eternal golden braid. New York: Basic Books.
     ■ Holliday, S. G., & M. J. Chandler (1986). Wisdom: Explorations in adult competence. Basel, Switzerland: Karger.
     ■ Horn, J. L. (1986). In R. J. Sternberg (Ed.), Advances in the psychology of human intelligence (Vol. 3). Hillsdale, NJ: Erlbaum.
     ■ Hull, C. (1943). Principles of behavior. New York: Appleton-Century-Crofts.
     ■ Hume, D. (1955). An inquiry concerning human understanding. New York: Liberal Arts Press. (Originally published in 1748.)
     ■ Hume, D. (1975). An enquiry concerning human understanding. In L. A. Selby-Bigge (Ed.), Hume's enquiries (3rd. ed., revised P. H. Nidditch). Oxford: Clarendon. (Spelling and punctuation revised.) (Originally published in 1748.)
     ■ Hume, D. (1978). A treatise of human nature. L. A. Selby-Bigge (Ed.) (3rd. ed., revised P. H. Nidditch). Oxford: Clarendon. (With some modifications of spelling and punctuation.) (Originally published in 1739-1740.)
     ■ Hunt, E. (1973). The memory we must have. In R. C. Schank & K. M. Colby (Eds.), Computer models of thought and language. (pp. 343-371) San Francisco: W. H. Freeman.
     ■ Husserl, E. (1960). Cartesian meditations. The Hague: Martinus Nijhoff.
     ■ Inhelder, B., & J. Piaget (1958). The growth of logical thinking from childhood to adolescence. New York: Basic Books. (Originally published in 1955 as De la logique de l'enfant à la logique de l'adolescent. [Paris: Presses Universitaires de France])
     ■ James, W. (1890a). The principles of psychology (Vol. 1). New York: Dover Books.
     ■ James, W. (1890b). The principles of psychology. New York: Henry Holt.
     ■ Jevons, W. S. (1900). The principles of science (2nd ed.). London: Macmillan.
     ■ Johnson, G. (1986). Machinery of the mind: Inside the new science of artificial intelligence. New York: Random House.
     ■ Johnson, M. L. (1988). Mind, language, machine. New York: St. Martin's Press.
     ■ Johnson-Laird, P. N. (1983). Mental models: Toward a cognitive science of language, inference, and consciousness. Cambridge, MA: Harvard University Press.
     ■ Johnson-Laird, P. N. (1988). The computer and the mind: An introduction to cognitive science. Cambridge, MA: Harvard University Press.
     ■ Jones, E. (1961). The life and work of Sigmund Freud. L. Trilling & S. Marcus (Eds.). London: Hogarth.
     ■ Jones, R. V. (1985). Complementarity as a way of life. In A. P. French & P. J. Kennedy (Eds.), Niels Bohr: A centenary volume. Cambridge, MA: Harvard University Press.
     ■ Kant, I. (1933). Critique of Pure Reason (2nd ed.). N. K. Smith (Trans.). London: Macmillan. (Originally published in 1781 as Kritik der reinen Vernunft.)
     ■ Kant, I. (1891). Solution of the general problems of the Prolegomena. In E. Belfort (Trans.), Kant's Prolegomena. London: Bell. (With minor modifications.) (Originally published in 1783.)
     ■ Katona, G. (1940). Organizing and memorizing: Studies in the psychology of learning and teaching. New York: Columbia University Press.
     ■ Kaufman, A. S. (1979). Intelligent testing with the WISC-R. New York: Wiley.
     ■ Koestler, A. (1964). The act of creation. New York: Arkana (Penguin).
     ■ Kohlberg, L. (1971). From is to ought. In T. Mischel (Ed.), Cognitive development and epistemology. (pp. 151-235) New York: Academic Press.
     ■ Köhler, W. (1925). The mentality of apes. New York: Liveright.
     ■ Köhler, W. (1927). The mentality of apes (2nd ed.). Ella Winter (Trans.). London: Routledge & Kegan Paul.
     ■ Köhler, W. (1930). Gestalt psychology. London: G. Bell.
     ■ Köhler, W. (1947). Gestalt psychology. New York: Liveright.
     ■ Köhler, W. (1969). The task of Gestalt psychology. Princeton, NJ: Princeton University Press.
     ■ Kuhn, T. (1970). The structure of scientific revolutions (2nd ed.). Chicago: University of Chicago Press.
     ■ Langer, E. J. (1989). Mindfulness. Reading, MA: Addison-Wesley.
     ■ Langer, S. (1962). Philosophical sketches. Baltimore: Johns Hopkins University Press.
     ■ Langley, P., H. A. Simon, G. L. Bradshaw & J. M. Zytkow (1987). Scientific discovery: Computational explorations of the creative process. Cambridge, MA: MIT Press.
     ■ Lashley, K. S. (1951). The problem of serial order in behavior. In L. A. Jeffress (Ed.), Cerebral mechanisms in behavior, the Hixon Symposium (pp. 112-146) New York: Wiley.
     ■ LeDoux, J. E., & W. Hirst (1986). Mind and brain: Dialogues in cognitive neuroscience. Cambridge: Cambridge University Press.
     ■ Lehnert, W. (1978). The process of question answering. Hillsdale, NJ: Lawrence Erlbaum Associates.
     ■ Leiber, J. (1991). Invitation to cognitive science. Oxford: Blackwell.
     ■ Lenat, D. B., & G. Harris (1978). Designing a rule system that searches for scientific discoveries. In D. A. Waterman & F. Hayes-Roth (Eds.), Pattern directed inference systems (pp. 25-52) New York: Academic Press.
     ■ Levenson, T. (1995). Measure for measure: A musical history of science. New York: Touchstone. (Originally published in 1994.)
     ■ Lévi-Strauss, C. (1963). Structural anthropology. C. Jacobson & B. Grundfest Schoepf (Trans.). New York: Basic Books. (Originally published in 1958.)
     ■ Levine, M. W., & J. M. Schefner (1981). Fundamentals of sensation and perception. London: Addison-Wesley.
     ■ Lewis, C. I. (1946). An analysis of knowledge and valuation. LaSalle, IL: Open Court.
     ■ Lighthill, J. (1972). A report on artificial intelligence. Unpublished manuscript, Science Research Council.
     ■ Lipman, M., A. M. Sharp & F. S. Oscanyan (1980). Philosophy in the classroom. Philadelphia: Temple University Press.
     ■ Lippmann, W. (1965). Public opinion. New York: Free Press. (Originally published in 1922.)
     ■ Locke, J. (1956). An essay concerning human understanding. Chicago: Henry Regnery Co. (Originally published in 1690.)
     ■ Locke, J. (1975). An essay concerning human understanding. P. H. Nidditch (Ed.). Oxford: Clarendon. (Originally published in 1690.) (With spelling and punctuation modernized and some minor modifications of phrasing.)
     ■ Lopate, P. (1994). The art of the personal essay. New York: Doubleday/Anchor Books.
     ■ Lorimer, F. (1929). The growth of reason. London: Kegan Paul.
     ■ Machlup, F., & U. Mansfield (Eds.) (1983). The study of information. New York: Wiley.
     ■ Manguel, A. (1996). A history of reading. New York: Viking.
     ■ Margolis, H. (1987). Patterns, thinking, and cognition. Chicago: University of Chicago Press.
     ■ Markey, J. F. (1928). The symbolic process. London: Kegan Paul.
     ■ Martin, R. M. (1969). On Ziff's "Natural and formal languages." In S. Hook (Ed.), Language and philosophy: A symposium (pp. 249-263). New York: New York University Press.
     ■ Mazlish, B. (1993). The fourth discontinuity: The co-evolution of humans and machines. New Haven, CT: Yale University Press.
     ■ McCarthy, J., & P. J. Hayes (1969). Some philosophical problems from the standpoint of artificial intelligence. In B. Meltzer & D. Michie (Eds.), Machine intelligence 4. Edinburgh: Edinburgh University Press.
     ■ McClelland, J. L., D. E. Rumelhart & G. E. Hinton (1986). The appeal of parallel distributed processing. In D. E. Rumelhart, J. L. McClelland & the PDP Research Group (Eds.), Parallel distributed processing: Explorations in the microstructure of cognition (Vol. 1, pp. 3-40). Cambridge, MA: MIT Press/Bradford Books.
     ■ McCorduck, P. (1979). Machines who think. San Francisco: W. H. Freeman.
     ■ McLaughlin, T. (1970). Music and communication. London: Faber & Faber.
     ■ Mednick, S. A. (1962). The associative basis of the creative process. Psychological Review 69, 431-436.
     ■ Meehl, P. E., & C. J. Golden (1982). Taxometric methods. In Kendall, P. C., & Butcher, J. N. (Eds.), Handbook of research methods in clinical psychology (pp. 127-182). New York: Wiley.
     ■ Mehler, J., E.C.T. Walker & M. Garrett (Eds.) (1982). Perspectives on mental representation: Experimental and theoretical studies of cognitive processes and capacities. Hillsdale, NJ: Lawrence Erlbaum Associates.
     ■ Mill, J. S. (1900). A system of logic, ratiocinative and inductive: Being a connected view of the principles of evidence and the methods of scientific investigation. London: Longmans, Green.
     ■ Miller, G. A. (1979, June). A very personal history. Talk to the Cognitive Science Workshop, Cambridge, MA.
     ■ Miller, J. (1983). States of mind. New York: Pantheon Books.
     ■ Minsky, M. (1975). A framework for representing knowledge. In P. H. Winston (Ed.), The psychology of computer vision (pp. 211-277). New York: McGraw-Hill.
     ■ Minsky, M., & S. Papert (1973). Artificial intelligence. Condon Lectures, Oregon State System of Higher Education, Eugene, Oregon.
     ■ Minsky, M. L. (1986). The society of mind. New York: Simon & Schuster.
     ■ Mischel, T. (1976). Psychological explanations and their vicissitudes. In J. K. Cole & W. J. Arnold (Eds.), Nebraska Symposium on motivation (Vol. 23). Lincoln, NB: University of Nebraska Press.
     ■ Morford, M.P.O., & R. J. Lenardon (1995). Classical mythology (5th ed.). New York: Longman.
     ■ Murdoch, I. (1954). Under the net. New York: Penguin.
     ■ Nagel, E. (1959). Methodological issues in psychoanalytic theory. In S. Hook (Ed.), Psychoanalysis, scientific method, and philosophy: A symposium. New York: New York University Press.
     ■ Nagel, T. (1979). Mortal questions. London: Cambridge University Press.
     ■ Nagel, T. (1986). The view from nowhere. Oxford: Oxford University Press.
     ■ Neisser, U. (1967). Cognitive psychology. New York: Appleton-Century-Crofts.
     ■ Neisser, U. (1972). Changing conceptions of imagery. In P. W. Sheehan (Ed.), The function and nature of imagery (pp. 233-251). London: Academic Press.
     ■ Neisser, U. (1976). Cognition and reality. San Francisco: W. H. Freeman.
     ■ Neisser, U. (1978). Memory: What are the important questions? In M. M. Gruneberg, P. E. Morris & R. N. Sykes (Eds.), Practical aspects of memory (pp. 3-24). London: Academic Press.
     ■ Neisser, U. (1979). The concept of intelligence. In R. J. Sternberg & D. K. Detterman (Eds.), Human intelligence: Perspectives on its theory and measurement (pp. 179-190). Norwood, NJ: Ablex.
     ■ Nersessian, N. (1992). How do scientists think? Capturing the dynamics of conceptual change in science. In R. N. Giere (Ed.), Cognitive models of science (pp. 3-44). Minneapolis: University of Minnesota Press.
     ■ Newell, A. (1973a). Artificial intelligence and the concept of mind. In R. C. Schank & K. M. Colby (Eds.), Computer models of thought and language (pp. 1-60). San Francisco: W. H. Freeman.
     ■ Newell, A. (1973b). You can't play 20 questions with nature and win. In W. G. Chase (Ed.), Visual information processing (pp. 283-310). New York: Academic Press.
     ■ Newell, A., & H. A. Simon (1963). GPS: A program that simulates human thought. In E. A. Feigenbaum & J. Feldman (Eds.), Computers and thought (pp. 279-293). New York: McGraw-Hill.
     ■ Newell, A., & H. A. Simon (1972). Human problem solving. Englewood Cliffs, NJ: Prentice-Hall.
     ■ Nietzsche, F. (1966). Beyond good and evil. W. Kaufmann (Trans.). New York: Vintage. (Originally published in 1885.)
     ■ Nilsson, N. J. (1971). Problem-solving methods in artificial intelligence. New York: McGraw-Hill.
     ■ Nussbaum, M. C. (1978). Aristotle's De Motu Animalium. Princeton, NJ: Princeton University Press.
     ■ Oersted, H. C. (1920). Thermo-electricity. In Kirstine Meyer (Ed.), H. C. Oersted, Naturvidenskabelige Skrifter (Vol. 2). Copenhagen: n.p. (Originally published in 1830 in The Edinburgh encyclopaedia.)
     ■ Ong, W. J. (1982). Orality and literacy: The technologizing of the word. London: Methuen.
     ■ Onians, R. B. (1954). The origins of European thought. Cambridge: Cambridge University Press.
     ■ Osgood, C. E. (1960). Method and theory in experimental psychology. New York: Oxford University Press. (Originally published in 1953.)
     ■ Osgood, C. E. (1966). Language universals and psycholinguistics. In J. H. Greenberg (Ed.), Universals of language (2nd ed., pp. 299-322). Cambridge, MA: MIT Press.
     ■ Palmer, R. E. (1969). Hermeneutics. Evanston, IL: Northwestern University Press.
     ■ Peirce, C. S. (1934). Some consequences of four incapacities-Man, a sign. In C. Hartshorne & P. Weiss (Eds.), Collected papers of Charles Sanders Peirce (Vol. 5, pp. 185-189). Cambridge, MA: Harvard University Press.
     ■ Penfield, W. (1959). In W. Penfield & L. Roberts, Speech and brain mechanisms. Princeton, NJ: Princeton University Press.
     ■ Penrose, R. (1994). Shadows of the mind: A search for the missing science of consciousness. Oxford: Oxford University Press.
     ■ Perkins, D. N. (1981). The mind's best work. Cambridge, MA: Harvard University Press.
     ■ Peterfreund, E. (1986). The heuristic approach to psychoanalytic therapy. In J. Reppen (Ed.), Analysts at work (pp. 127-144). Hillsdale, NJ: Analytic Press.
     ■ Piaget, J. (1952). The origin of intelligence in children. New York: International Universities Press. (Originally published in 1936.)
     ■ Piaget, J. (1954). Le langage et les opérations intellectuelles. Problèmes de psycholinguistique. Symposium de l'Association de Psychologie Scientifique de Langue Française. Paris: Presses Universitaires de France.
     ■ Piaget, J. (1977). Problems of equilibration. In H. E. Gruber & J. J. Voneche (Eds.), The essential Piaget (pp. 838-841). London: Routledge & Kegan Paul. (Originally published in 1975 as L'équilibration des structures cognitives [Paris: Presses Universitaires de France].)
     ■ Piaget, J., & B. Inhelder. (1973). Memory and intelligence. New York: Basic Books.
     ■ Pinker, S. (1994). The language instinct. New York: Morrow.
     ■ Pinker, S. (1996). Facts about human language relevant to its evolution. In J.-P. Changeux & J. Chavaillon (Eds.), Origins of the human brain: A symposium of the Fyssen Foundation (pp. 262-283). Oxford: Clarendon Press.
     ■ Planck, M. (1949). Scientific autobiography and other papers. F. Gaynor (Trans.). New York: Philosophical Library.
     ■ Planck, M. (1990). Wissenschaftliche Selbstbiographie. W. Berg (Ed.). Halle, Germany: Deutsche Akademie der Naturforscher Leopoldina.
     ■ Plato (1892). Meno. In The Dialogues of Plato (B. Jowett, Trans.; Vol. 2). New York: Clarendon. (Originally published circa 380 B.C.)
     ■ Poincaré, H. (1913). Mathematical creation. In The foundations of science. G. B. Halsted (Trans.). New York: Science Press.
     ■ Poincaré, H. (1921). The foundations of science: Science and hypothesis, the value of science, science and method. G. B. Halsted (Trans.). New York: Science Press.
     ■ Poincaré, H. (1929). The foundations of science: Science and hypothesis, the value of science, science and method. New York: Science Press.
     ■ Poincaré, H. (1952). Science and method. F. Maitland (Trans.). New York: Dover.
     ■ Polanyi, M. (1958). Personal knowledge. London: Routledge & Kegan Paul.
     ■ Polya, G. (1945). How to solve it. Princeton, NJ: Princeton University Press.
     ■ Popper, K. (1968). Conjectures and refutations: The growth of scientific knowledge. New York: Harper & Row/Basic Books.
     ■ Popper, K., & J. Eccles (1977). The self and its brain. New York: Springer-Verlag.
     ■ Popper, K. R. (1959). The logic of scientific discovery. London: Hutchinson.
     ■ Putnam, H. (1975). Mind, language and reality: Philosophical papers (Vol. 2). Cambridge: Cambridge University Press.
     ■ Putnam, H. (1987). The faces of realism. LaSalle, IL: Open Court.
     ■ Pylyshyn, Z. W. (1981). The imagery debate: Analog media versus tacit knowledge. In N. Block (Ed.), Imagery (pp. 151-206). Cambridge, MA: MIT Press.
     ■ Pylyshyn, Z. W. (1984). Computation and cognition: Towards a foundation for cog nitive science. Cambridge, MA: MIT Press/Bradford Books.
     ■ Quillian, M. R. (1968). Semantic memory. In M. Minsky (Ed.), Semantic information processing (pp. 216-260). Cambridge, MA: MIT Press.
     ■ Quine, W.V.O. (1960). Word and object. Cambridge, MA: Harvard University Press.
     ■ Rabbitt, P.M.A., & S. Dornic (Eds.). Attention and performance (Vol. 5). London: Academic Press.
     ■ Rawlins, G.J.E. (1997). Slaves of the Machine: The quickening of computer technology. Cambridge, MA: MIT Press/Bradford Books.
     ■ Reid, T. (1970). An inquiry into the human mind on the principles of common sense. In R. Brown (Ed.), Between Hume and Mill: An anthology of British philosophy, 1749-1843 (pp. 151-178). New York: Random House/Modern Library.
     ■ Reitman, W. (1970). What does it take to remember? In D. A. Norman (Ed.), Models of human memory (pp. 470-510). London: Academic Press.
     ■ Ricoeur, P. (1974). Structure and hermeneutics. In D. I. Ihde (Ed.), The conflict of interpretations: Essays in hermeneutics (pp. 27-61). Evanston, IL: Northwestern University Press.
     ■ Robinson, D. N. (1986). An intellectual history of psychology. Madison: University of Wisconsin Press.
     ■ Rorty, R. (1979). Philosophy and the mirror of nature. Princeton, NJ: Princeton University Press.
     ■ Rosch, E. (1977). Human categorization. In N. Warren (Ed.), Studies in cross-cultural psychology (Vol. 1, pp. 1-49). London: Academic Press.
     ■ Rosch, E. (1978). Principles of categorization. In E. Rosch & B. B. Lloyd (Eds.), Cognition and categorization (pp. 27-48). Hillsdale, NJ: Lawrence Erlbaum Associates.
     ■ Rosch, E., & B. B. Lloyd (1978). Principles of categorization. In E. Rosch & B. B. Lloyd (Eds.), Cognition and categorization. Hillsdale, NJ: Lawrence Erlbaum Associates.
     ■ Rose, S. (1970). The chemistry of life. Baltimore: Penguin Books.
     ■ Rose, S. (1976). The conscious brain (updated ed.). New York: Random House.
     ■ Rose, S. (1993). The making of memory: From molecules to mind. New York: Anchor Books. (Originally published in 1992.)
     ■ Roszak, T. (1994). The cult of information: A neo-Luddite treatise on high-tech, artificial intelligence, and the true art of thinking (2nd ed.). Berkeley: University of California Press.
     ■ Royce, J. R., & W. W. Rozeboom (Eds.) (1972). The psychology of knowing. New York: Gordon & Breach.
     ■ Rumelhart, D. E. (1977). Introduction to human information processing. New York: Wiley.
     ■ Rumelhart, D. E. (1980). Schemata: The building blocks of cognition. In R. J. Spiro, B. Bruce & W. F. Brewer (Eds.), Theoretical issues in reading comprehension. Hillsdale, NJ: Lawrence Erlbaum Associates.
     ■ Rumelhart, D. E., & J. L. McClelland (1986). On learning the past tenses of English verbs. In J. L. McClelland & D. E. Rumelhart (Eds.), Parallel distributed processing: Explorations in the microstructure of cognition (Vol. 2). Cambridge, MA: MIT Press.
     ■ Rumelhart, D. E., P. Smolensky, J. L. McClelland & G. E. Hinton (1986). Schemata and sequential thought processes in PDP models. In J. L. McClelland, D. E. Rumelhart & the PDP Research Group (Eds.), Parallel Distributed Processing (Vol. 2, pp. 7-57). Cambridge, MA: MIT Press.
     ■ Russell, B. (1927). An outline of philosophy. London: G. Allen & Unwin.
     ■ Russell, B. (1961). History of Western philosophy. London: George Allen & Unwin.
     ■ Russell, B. (1965). How I write. In Portraits from memory and other essays. London: Allen & Unwin.
     ■ Russell, B. (1992). In N. Griffin (Ed.), The selected letters of Bertrand Russell (Vol. 1), The private years, 1884-1914. Boston: Houghton Mifflin.
     ■ Rycroft, C. (1966). Psychoanalysis observed. London: Constable.
     ■ Sagan, C. (1978). The dragons of Eden: Speculations on the evolution of human intelligence. New York: Ballantine Books.
     ■ Salthouse, T. A. (1992). Expertise as the circumvention of human processing limitations. In K. A. Ericsson & J. Smith (Eds.), Toward a general theory of expertise: Prospects and limits (pp. 172-194). Cambridge: Cambridge University Press.
     ■ Sanford, A. J. (1987). The mind of man: Models of human understanding. New Haven, CT: Yale University Press.
     ■ Sapir, E. (1921). Language. New York: Harcourt, Brace, and World.
     ■ Sapir, E. (1964). Culture, language, and personality. Berkeley: University of California Press. (Originally published in 1941.)
     ■ Sapir, E. (1985). The status of linguistics as a science. In D. G. Mandelbaum (Ed.), Selected writings of Edward Sapir in language, culture and personality (pp. 160-166). Berkeley: University of California Press. (Originally published in 1929.)
     ■ Scardamalia, M., & C. Bereiter (1992). Literate expertise. In K. A. Ericsson & J. Smith (Eds.), Toward a general theory of expertise: Prospects and limits (pp. 172-194). Cambridge: Cambridge University Press.
     ■ Schafer, R. (1954). Psychoanalytic interpretation in Rorschach testing. New York: Grune & Stratton.
     ■ Schank, R. C. (1973). Identification of conceptualizations underlying natural language. In R. C. Schank & K. M. Colby (Eds.), Computer models of thought and language (pp. 187-248). San Francisco: W. H. Freeman.
     ■ Schank, R. C. (1976). The role of memory in language processing. In C. N. Cofer (Ed.), The structure of human memory (pp. 162-189). San Francisco: W. H. Freeman.
     ■ Schank, R. C. (1986). Explanation patterns: Understanding mechanically and creatively. Hillsdale, NJ: Lawrence Erlbaum Associates.
     ■ Schank, R. C., & R. P. Abelson (1977). Scripts, plans, goals, and understanding. Hillsdale, NJ: Lawrence Erlbaum Associates.
     ■ Schrödinger, E. (1951). Science and humanism. Cambridge: Cambridge University Press.
     ■ Searle, J. R. (1981a). Minds, brains, and programs. In J. Haugeland (Ed.), Mind design: Philosophy, psychology, artificial intelligence (pp. 282-306). Cambridge, MA: MIT Press.
     ■ Searle, J. R. (1981b). Minds, brains and programs. In D. Hofstadter & D. Dennett (Eds.), The mind's I (pp. 353-373). New York: Basic Books.
     ■ Searle, J. R. (1983). Intentionality. New York: Cambridge University Press.
     ■ Serres, M. (1982). The origin of language: Biology, information theory, and thermodynamics. M. Anderson (Trans.). In J. V. Harari & D. F. Bell (Eds.), Hermes: Literature, science, philosophy (pp. 71-83). Baltimore: Johns Hopkins University Press.
     ■ Simon, H. A. (1966). Scientific discovery and the psychology of problem solving. In R. G. Colodny (Ed.), Mind and cosmos: Essays in contemporary science and philosophy (pp. 22-40). Pittsburgh: University of Pittsburgh Press.
     ■ Simon, H. A. (1979). Models of thought. New Haven, CT: Yale University Press.
     ■ Simon, H. A. (1989). The scientist as a problem solver. In D. Klahr & K. Kotovsky (Eds.), Complex information processing: The impact of Herbert Simon. Hillsdale, NJ: Lawrence Erlbaum Associates.
     ■ Simon, H. A., & C. Kaplan (1989). Foundations of cognitive science. In M. Posner (Ed.), Foundations of cognitive science (pp. 1-47). Cambridge, MA: MIT Press.
     ■ Simonton, D. K. (1988). Creativity, leadership and chance. In R. J. Sternberg (Ed.), The nature of creativity. Cambridge: Cambridge University Press.
     ■ Skinner, B. F. (1974). About behaviorism. New York: Knopf.
     ■ Smith, E. E. (1988). Concepts and thought. In J. Sternberg & E. E. Smith (Eds.), The psychology of human thought (pp. 19-49). Cambridge: Cambridge University Press.
     ■ Smith, E. E. (1990). Thinking: Introduction. In D. N. Osherson & E. E. Smith (Eds.), Thinking: An invitation to cognitive science (Vol. 3, pp. 1-2). Cambridge, MA: MIT Press.
     ■ Socrates. (1958). Meno. In E. H. Warmington & P. O. Rouse (Eds.), Great dialogues of Plato (W.H.D. Rouse, Trans.). New York: New American Library. (Original publication date unknown.)
     ■ Solso, R. L. (1974). Theories of retrieval. In R. L. Solso (Ed.), Theories in cognitive psychology. Potomac, MD: Lawrence Erlbaum Associates.
     ■ Spencer, H. (1896). The principles of psychology. New York: Appleton-Century-Crofts.
     ■ Steiner, G. (1975). After Babel: Aspects of language and translation. New York: Oxford University Press.
     ■ Sternberg, R. J. (1977). Intelligence, information processing, and analogical reasoning. Hillsdale, NJ: Lawrence Erlbaum Associates.
     ■ Sternberg, R. J. (1994). Intelligence. In R. J. Sternberg (Ed.), Thinking and problem solving. San Diego: Academic Press.
     ■ Sternberg, R. J., & J. E. Davidson (1985). Cognitive development in gifted and talented. In F. D. Horowitz & M. O'Brien (Eds.), The gifted and talented (pp. 103-135). Washington, DC: American Psychological Association.
     ■ Storr, A. (1993). The dynamics of creation. New York: Ballantine Books. (Originally published in 1972.)
     ■ Stumpf, S. E. (1994). Philosophy: History and problems (5th ed.). New York: McGraw-Hill.
     ■ Sulloway, F. J. (1996). Born to rebel: Birth order, family dynamics, and creative lives. New York: Random House/Vintage Books.
     ■ Thorndike, E. L. (1906). Principles of teaching. New York: A. G. Seiler.
     ■ Thorndike, E. L. (1970). Animal intelligence: Experimental studies. Darien, CT: Hafner Publishing Co. (Originally published in 1911.)
     ■ Titchener, E. B. (1910). A textbook of psychology. New York: Macmillan.
     ■ Titchener, E. B. (1914). A primer of psychology. New York: Macmillan.
     ■ Toulmin, S. (1957). The philosophy of science. London: Hutchinson.
     ■ Tulving, E. (1972). Episodic and semantic memory. In E. Tulving & W. Donaldson (Eds.), Organization of memory. London: Academic Press.
     ■ Turing, A. (1946). In B. E. Carpenter & R. W. Doran (Eds.), ACE reports of 1946 and other papers. Cambridge, MA: MIT Press.
     ■ Turkle, S. (1984). Computers and the second self: Computers and the human spirit. New York: Simon & Schuster.
     ■ Tyler, S. A. (1978). The said and the unsaid: Mind, meaning, and culture. New York: Academic Press.
     ■ van Heijenoort, J. (Ed.) (1967). From Frege to Gödel. Cambridge, MA: Harvard University Press.
     ■ Varela, F. J. (1984). The creative circle: Sketches on the natural history of circularity. In P. Watzlawick (Ed.), The invented reality (pp. 309-324). New York: W. W. Norton.
     ■ Voltaire (1961). On the Pensées of M. Pascal. In Philosophical letters (pp. 119-146). E. Dilworth (Trans.). Indianapolis: Bobbs-Merrill.
     ■ Wagman, M. (1997a). Cognitive science and the symbolic operations of human and artificial intelligence: Theory and research into the intellective processes. Westport, CT: Praeger.
     ■ Wagman, M. (1997b). The general unified theory of intelligence: Central conceptions and specific application to domains of cognitive science. Westport, CT: Praeger.
     ■ Wagman, M. (1998a). Cognitive science and the mind-body problem: From philosophy to psychology to artificial intelligence to imaging of the brain. Westport, CT: Praeger.
     ■ Wagman, M. (1999). The human mind according to artificial intelligence: Theory, research, and implications. Westport, CT: Praeger.
     ■ Wall, R. (1972). Introduction to mathematical linguistics. Englewood Cliffs, NJ: Prentice-Hall.
     ■ Wallas, G. (1926). The art of thought. New York: Harcourt, Brace & Co.
     ■ Wason, P. (1977). Self-contradictions. In P. Johnson-Laird & P. Wason (Eds.), Thinking: Readings in cognitive science. Cambridge: Cambridge University Press.
     ■ Wason, P. C., & P. N. Johnson-Laird. (1972). Psychology of reasoning: Structure and content. Cambridge, MA: Harvard University Press.
     ■ Watson, J. (1930). Behaviorism. New York: W. W. Norton.
     ■ Watzlawick, P. (1984). Epilogue. In P. Watzlawick (Ed.), The invented reality. New York: W. W. Norton.
     ■ Weinberg, S. (1977). The first three minutes: A modern view of the origin of the uni verse. New York: Basic Books.
     ■ Weisberg, R. W. (1986). Creativity: Genius and other myths. New York: W. H. Freeman.
     ■ Weizenbaum, J. (1976). Computer power and human reason: From judgment to calculation. San Francisco: W. H. Freeman.
     ■ Wertheimer, M. (1945). Productive thinking. New York: Harper & Bros.
     ■ Whitehead, A. N. (1925). Science and the modern world. New York: Macmillan.
     ■ Whorf, B. L. (1956). In J. B. Carroll (Ed.), Language, thought and reality: Selected writings of Benjamin Lee Whorf. Cambridge, MA: MIT Press.
     ■ Whyte, L. L. (1962). The unconscious before Freud. New York: Anchor Books.
     ■ Wiener, N. (1954). The human use of human beings. Boston: Houghton Mifflin.
     ■ Wiener, N. (1964). God & Golem, Inc.: A comment on certain points where cybernetics impinges on religion. Cambridge, MA: MIT Press.
     ■ Winograd, T. (1972). Understanding natural language. New York: Academic Press.
     ■ Winston, P. H. (1987). Artificial intelligence: A perspective. In E. L. Grimson & R. S. Patil (Eds.), AI in the 1980s and beyond (pp. 1-12). Cambridge, MA: MIT Press.
     ■ Winston, P. H. (Ed.) (1975). The psychology of computer vision. New York: McGraw-Hill.
     ■ Wittgenstein, L. (1953). Philosophical investigations. Oxford: Basil Blackwell.
     ■ Wittgenstein, L. (1958). The blue and brown books. New York: Harper Colophon.
     ■ Woods, W. A. (1975). What's in a link: Foundations for semantic networks. In D. G. Bobrow & A. Collins (Eds.), Representations and understanding: Studies in cognitive science (pp. 35-84). New York: Academic Press.
     ■ Woodworth, R. S. (1938). Experimental psychology. New York: Holt; London: Methuen (1939).
     ■ Wundt, W. (1904). Principles of physiological psychology (Vol. 1). E. B. Titchener (Trans.). New York: Macmillan.
     ■ Wundt, W. (1907). Lectures on human and animal psychology. J. E. Creighton & E. B. Titchener (Trans.). New York: Macmillan.
     ■ Young, J. Z. (1978). Programs of the brain. New York: Oxford University Press.
     ■ Ziman, J. (1978). Reliable knowledge: An exploration of the grounds for belief in science. Cambridge: Cambridge University Press.

    Historical dictionary of quotations in cognitive science > Bibliography

  • 18 Language

       Philosophy is written in that great book, the universe, which is always open, right before our eyes. But one cannot understand this book without first learning to understand the language and to know the characters in which it is written. It is written in the language of mathematics, and the characters are triangles, circles, and other figures. Without these, one cannot understand a single word of it, and just wanders in a dark labyrinth. (Galileo, 1990, p. 232)
       It never happens that it [a nonhuman animal] arranges its speech in various ways in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do. (Descartes, 1970a, p. 116)
       It is a very remarkable fact that there are none so depraved and stupid, without even excepting idiots, that they cannot arrange different words together, forming of them a statement by which they make known their thoughts; while, on the other hand, there is no other animal, however perfect and fortunately circumstanced it may be, which can do the same. (Descartes, 1967, p. 116)
       Human beings do not live in the object world alone, nor alone in the world of social activity as ordinarily understood, but are very much at the mercy of the particular language which has become the medium of expression for their society. It is quite an illusion to imagine that one adjusts to reality essentially without the use of language and that language is merely an incidental means of solving specific problems of communication or reflection. The fact of the matter is that the "real world" is to a large extent unconsciously built on the language habits of the group.... We see and hear and otherwise experience very largely as we do because the language habits of our community predispose certain choices of interpretation. (Sapir, 1921, p. 75)
       It powerfully conditions all our thinking about social problems and processes.... No two languages are ever sufficiently similar to be considered as representing the same social reality. The worlds in which different societies live are distinct worlds, not merely the same worlds with different labels attached. (Sapir, 1985, p. 162)
       [A list of language games, not meant to be exhaustive:]
       Giving orders, and obeying them-
       Describing the appearance of an object, or giving its measurements-
       Constructing an object from a description (a drawing)-
       Reporting an event-
       Speculating about an event-
       Forming and testing a hypothesis-
       Presenting the results of an experiment in tables and diagrams-
       Making up a story; and reading it-
       Play acting-
       Singing catches-
       Guessing riddles-
       Making a joke; and telling it-
       Solving a problem in practical arithmetic-
       Translating from one language into another-
       Asking, thanking, cursing, greeting, and praying. (Wittgenstein, 1953, Pt. I, No. 23, pp. 11e-12e)
       We dissect nature along lines laid down by our native languages.... The world is presented in a kaleidoscopic flux of impressions which has to be organized by our minds-and this means largely by the linguistic systems in our minds.... No individual is free to describe nature with absolute impartiality but is constrained to certain modes of interpretation even while he thinks himself most free. (Whorf, 1956, pp. 153, 213-214)
       We dissect nature along the lines laid down by our native languages.
       The categories and types that we isolate from the world of phenomena we do not find there because they stare every observer in the face; on the contrary, the world is presented in a kaleidoscopic flux of impressions which has to be organized by our minds-and this means largely by the linguistic systems in our minds.... We are thus introduced to a new principle of relativity, which holds that all observers are not led by the same physical evidence to the same picture of the universe, unless their linguistic backgrounds are similar or can in some way be calibrated. (Whorf, 1956, pp. 213-214)
       9) The Forms of a Person's Thoughts Are Controlled by Unperceived Patterns of His Own Language
       The forms of a person's thoughts are controlled by inexorable laws of pattern of which he is unconscious. These patterns are the unperceived intricate systematizations of his own language-shown readily enough by a candid comparison and contrast with other languages, especially those of a different linguistic family. (Whorf, 1956, p. 252)
       It has come to be commonly held that many utterances which look like statements are either not intended at all, or only intended in part, to record or impart straightforward information about the facts.... Many traditional philosophical perplexities have arisen through a mistake-the mistake of taking as straightforward statements of fact utterances which are either (in interesting non-grammatical ways) nonsensical or else intended as something quite different. (Austin, 1962, pp. 2-3)
       In general, one might define a complex of semantic components connected by logical constants as a concept. The dictionary of a language is then a system of concepts in which a phonological form and certain syntactic and morphological characteristics are assigned to each concept. This system of concepts is structured by several types of relations. It is supplemented, furthermore, by redundancy or implicational rules..., representing general properties of the whole system of concepts.... At least a relevant part of these general rules is not bound to particular languages, but represents presumably universal structures of natural languages. They are not learned, but are rather a part of the human ability to acquire an arbitrary natural language. (Bierwisch, 1970, pp. 171-172)
       In studying the evolution of mind, we cannot guess to what extent there are physically possible alternatives to, say, transformational generative grammar, for an organism meeting certain other physical conditions characteristic of humans. Conceivably, there are none-or very few-in which case talk about evolution of the language capacity is beside the point. (Chomsky, 1972, p. 98)
       [It is] truth value rather than syntactic well-formedness that chiefly governs explicit verbal reinforcement by parents-which renders mildly paradoxical the fact that the usual product of such a training schedule is an adult whose speech is highly grammatical but not notably truthful. (R. O. Brown, 1973, p. 330)
       [T]he conceptual base is responsible for formally representing the concepts underlying an utterance.... A given word in a language may or may not have one or more concepts underlying it.... On the sentential level, the utterances of a given language are encoded within a syntactic structure of that language. The basic construction of the sentential level is the sentence.
       The next highest level... is the conceptual level. We call the basic construction of this level the conceptualization. A conceptualization consists of concepts and certain relations among those concepts. We can consider that both levels exist at the same point in time and that for any unit on one level, some corresponding realizate exists on the other level. This realizate may be null or extremely complex.... Conceptualizations may relate to other conceptualizations by nesting or other specified relationships. (Schank, 1973, pp. 191-192)
       The mathematics of multi-dimensional interactive spaces and lattices, the projection of "computer behavior" on to possible models of cerebral functions, the theoretical and mechanical investigation of artificial intelligence, are producing a stream of sophisticated, often suggestive ideas.
       But it is, I believe, fair to say that nothing put forward until now in either theoretic design or mechanical mimicry comes even remotely in reach of the most rudimentary linguistic realities. (Steiner, 1975, p. 284)
       The step from the simple tool to the master tool, a tool to make tools (what we would now call a machine tool), seems to me indeed to parallel the final step to human language, which I call reconstitution. It expresses in a practical and social context the same understanding of hierarchy, and shows the same analysis by function as a basis for synthesis. (Bronowski, 1977, pp. 127-128)
       [I]t is the language donné in which we conduct our lives.... We have no other. And the danger is that formal linguistic models, in their loosely argued analogy with the axiomatic structure of the mathematical sciences, may block perception.... It is quite conceivable that, in language, continuous induction from simple, elemental units to more complex, realistic forms is not justified. The extent and formal "undecidability" of context-and every linguistic particle above the level of the phoneme is context-bound-may make it impossible, except in the most abstract, meta-linguistic sense, to pass from "pro-verbs," "kernels," or "deep structures" to actual speech. (Steiner, 1975, pp. 111-113)
       A higher-level formal language is an abstract machine. (Weizenbaum, 1976, p. 113)
       Jakobson sees metaphor and metonymy as the characteristic modes of binarily opposed polarities which between them underpin the two-fold process of selection and combination by which linguistic signs are formed.... Thus messages are constructed, as Saussure said, by a combination of a "horizontal" movement, which combines words together, and a "vertical" movement, which selects the particular words from the available inventory or "inner storehouse" of the language. The combinative (or syntagmatic) process manifests itself in contiguity (one word being placed next to another) and its mode is metonymic. The selective (or associative) process manifests itself in similarity (one word or concept being "like" another) and its mode is metaphoric. The "opposition" of metaphor and metonymy therefore may be said to represent in effect the essence of the total opposition between the synchronic mode of language (its immediate, coexistent, "vertical" relationships) and its diachronic mode (its sequential, successive, lineal progressive relationships). (Hawkes, 1977, pp. 77-78)
       It is striking that the layered structure that man has given to language constantly reappears in his analyses of nature. (Bronowski, 1977, p. 121)
       First, [an ideal intertheoretic reduction] provides us with a set of rules-"correspondence rules" or "bridge laws," as the standard vernacular has it-which effect a mapping of the terms of the old theory (To) onto a subset of the expressions of the new or reducing theory (Tn). These rules guide the application of those selected expressions of Tn in the following way: we are free to make singular applications of their correspondence-rule doppelgangers in To....
       Second, and equally important, a successful reduction ideally has the outcome that, under the term mapping effected by the correspondence rules, the central principles of To (those of semantic and systematic importance) are mapped onto general sentences of Tn that are theorems of Tn. (P. Churchland, 1979, p. 81)
       If non-linguistic factors must be included in grammar: beliefs, attitudes, etc. [this would] amount to a rejection of the initial idealization of language as an object of study. A priori such a move cannot be ruled out, but it must be empirically motivated. If it proves to be correct, I would conclude that language is a chaos that is not worth studying.... Note that the question is not whether beliefs or attitudes, and so on, play a role in linguistic behavior and linguistic judgments... [but rather] whether distinct cognitive structures can be identified, which interact in the real use of language and linguistic judgments, the grammatical system being one of these. (Chomsky, 1979, pp. 140, 152-153)
        23) Language Is Inevitably Influenced by Specific Contexts of Human Interaction
       Language cannot be studied in isolation from the investigation of "rationality." It cannot afford to neglect our everyday assumptions concerning the total behavior of a reasonable person.... An integrational linguistics must recognize that human beings inhabit a communicational space which is not neatly compartmentalized into language and nonlanguage.... It renounces in advance the possibility of setting up systems of forms and meanings which will "account for" a central core of linguistic behavior irrespective of the situation and communicational purposes involved. (Harris, 1981, p. 165)
       By innate [linguistic knowledge], Chomsky simply means "genetically programmed." He does not literally think that children are born with language in their heads ready to be spoken. He merely claims that a "blueprint" is there, which is brought into use when the child reaches a certain point in her general development. With the help of this blueprint, she analyzes the language she hears around her more readily than she would if she were totally unprepared for the strange gabbling sounds which emerge from human mouths. (Aitchison, 1987, p. 31)
       Looking at ourselves from the computer viewpoint, we cannot avoid seeing that natural language is our most important "programming language." This means that a vast portion of our knowledge and activity is, for us, best communicated and understood in our natural language.... One could say that natural language was our first great original artifact and, since, as we increasingly realize, languages are machines, so natural language, with our brains to run it, was our primal invention of the universal computer. One could say this except for the sneaking suspicion that language isn't something we invented but something we became, not something we constructed but something in which we created, and recreated, ourselves. (Leiber, 1991, p. 8)

    Historical dictionary of quotations in cognitive science > Language

  • 19 Knowledge

       It is indeed an opinion strangely prevailing amongst men, that houses, mountains, rivers, and, in a word, all sensible objects, have an existence, natural or real, distinct from their being perceived by the understanding. But, with how great an assurance and acquiescence soever this principle may be entertained in the world, yet whoever shall find in his heart to call it into question may, if I mistake not, perceive it to involve a manifest contradiction. For, what are the forementioned objects but things we perceive by sense? and what do we perceive besides our own ideas or sensations? and is it not plainly repugnant that any one of these, or any combination of them, should exist unperceived? (Berkeley, 1996, Pt. I, No. 4, p. 25)
       It seems to me that the only objects of the abstract sciences or of demonstration are quantity and number, and that all attempts to extend this more perfect species of knowledge beyond these bounds are mere sophistry and illusion. As the component parts of quantity and number are entirely similar, their relations become intricate and involved; and nothing can be more curious, as well as useful, than to trace, by a variety of mediums, their equality or inequality, through their different appearances.
       But as all other ideas are clearly distinct and different from each other, we can never advance farther, by our utmost scrutiny, than to observe this diversity, and, by an obvious reflection, pronounce one thing not to be another. Or if there be any difficulty in these decisions, it proceeds entirely from the undeterminate meaning of words, which is corrected by juster definitions. That the square of the hypotenuse is equal to the squares of the other two sides cannot be known, let the terms be ever so exactly defined, without a train of reasoning and enquiry. But to convince us of this proposition, that where there is no property, there can be no injustice, it is only necessary to define the terms, and explain injustice to be a violation of property. This proposition is, indeed, nothing but a more imperfect definition. It is the same case with all those pretended syllogistical reasonings, which may be found in every other branch of learning, except the sciences of quantity and number; and these may safely, I think, be pronounced the only proper objects of knowledge and demonstration. (Hume, 1975, Sec. 12, Pt. 3, pp. 163-165)
       Our knowledge springs from two fundamental sources of the mind; the first is the capacity of receiving representations (the ability to receive impressions), the second is the power to know an object through these representations (spontaneity in the production of concepts).
       Through the first, an object is given to us; through the second, the object is thought in relation to that representation.... Intuition and concepts constitute, therefore, the elements of all our knowledge, so that neither concepts without intuition in some way corresponding to them, nor intuition without concepts, can yield knowledge. Both may be either pure or empirical.... Pure intuitions or pure concepts are possible only a priori; empirical intuitions and empirical concepts only a posteriori. If the receptivity of our mind, its power of receiving representations in so far as it is in any way affected, is to be called "sensibility," then the mind's power of producing representations from itself, the spontaneity of knowledge, should be called "understanding." Our nature is so constituted that our intuitions can never be other than sensible; that is, it contains only the mode in which we are affected by objects. The faculty, on the other hand, which enables us to think the object of sensible intuition is the understanding.... Without sensibility, no object would be given to us; without understanding, no object would be thought. Thoughts without content are empty; intuitions without concepts are blind. It is therefore just as necessary to make our concepts sensible, that is, to add the object to them in intuition, as to make our intuitions intelligible, that is to bring them under concepts. These two powers or capacities cannot exchange their functions. The understanding can intuit nothing, the senses can think nothing. Only through their union can knowledge arise. (Kant, 1933, Sec. 1, Pt. 2, B74-75 [p. 92])
       Metaphysics, as a natural disposition of Reason is real, but it is also, in itself, dialectical and deceptive.... Hence to attempt to draw our principles from it, and in their employment to follow this natural but none the less fallacious illusion can never produce science, but only an empty dialectical art, in which one school may indeed outdo the other, but none can ever attain a justifiable and lasting success. In order that, as a science, it may lay claim not merely to deceptive persuasion, but to insight and conviction, a Critique of Reason must exhibit in a complete system the whole stock of conceptions a priori, arranged according to their different sources-the Sensibility, the understanding, and the Reason; it must present a complete table of these conceptions, together with their analysis and all that can be deduced from them, but more especially the possibility of synthetic knowledge a priori by means of their deduction, the principles of its use, and finally, its boundaries....
       This much is certain: he who has once tried criticism will be sickened for ever of all the dogmatic trash he was compelled to content himself with before, because his Reason, requiring something, could find nothing better for its occupation. Criticism stands to the ordinary school metaphysics exactly in the same relation as chemistry to alchemy, or as astronomy to fortune-telling astrology. I guarantee that no one who has comprehended and thought out the conclusions of criticism, even in these Prolegomena, will ever return to the old sophistical pseudo-science. He will rather look forward with a kind of pleasure to a metaphysics, certainly now within his power, which requires no more preparatory discoveries, and which alone can procure for reason permanent satisfaction. (Kant, 1891, pp. 115-116)
       Knowledge is only real and can only be set forth fully in the form of science, in the form of system. Further, a so-called fundamental proposition or first principle of philosophy, even if it is true, it is yet none the less false, just because and in so far as it is merely a fundamental proposition, merely a first principle. It is for that reason easily refuted. The refutation consists in bringing out its defective character; and it is defective because it is merely the universal, merely a principle, the beginning. If the refutation is complete and thorough, it is derived and developed from the nature of the principle itself, and not accomplished by bringing in from elsewhere other counter-assurances and chance fancies. It would be strictly the development of the principle, and thus the completion of its deficiency, were it not that it misunderstands its own purport by taking account solely of the negative aspect of what it seeks to do, and is not conscious of the positive character of its process and result. The really positive working out of the beginning is at the same time just as much the very reverse: it is a negative attitude towards the principle we start from. Negative, that is to say, in its one-sided form, which consists in being primarily immediate, a mere purpose. It may therefore be regarded as a refutation of what constitutes the basis of the system; but more correctly it should be looked at as a demonstration that the basis or principle of the system is in point of fact merely its beginning. (Hegel, 1910, pp. 21-22)
       Knowledge, action, and evaluation are essentially connected. The primary and pervasive significance of knowledge lies in its guidance of action: knowing is for the sake of doing. And action, obviously, is rooted in evaluation. For a being which did not assign comparative values, deliberate action would be pointless; and for one which did not know, it would be impossible. Conversely, only an active being could have knowledge, and only such a being could assign values to anything beyond his own feelings. A creature which did not enter into the process of reality to alter in some part the future content of it, could apprehend a world only in the sense of intuitive or esthetic contemplation; and such contemplation would not possess the significance of knowledge but only that of enjoying and suffering. (Lewis, 1946, p. 1)
       "Evolutionary epistemology" is a branch of scholarship that applies the evolutionary perspective to an understanding of how knowledge develops. Knowledge always involves getting information. The most primitive way of acquiring it is through the sense of touch: amoebas and other simple organisms know what happens around them only if they can feel it with their "skins." The knowledge such an organism can have is strictly about what is in its immediate vicinity. After a huge jump in evolution, organisms learned to find out what was going on at a distance from them, without having to actually feel the environment. This jump involved the development of sense organs for processing information that was farther away. For a long time, the most important sources of knowledge were the nose, the eyes, and the ears. The next big advance occurred when organisms developed memory. Now information no longer needed to be present at all, and the animal could recall events and outcomes that happened in the past. Each one of these steps in the evolution of knowledge added important survival advantages to the species that was equipped to use it.
       Then, with the appearance in evolution of humans, an entirely new way of acquiring information developed. Up to this point, the processing of information was entirely intrasomatic.... But when speech appeared (and even more powerfully with the invention of writing), information processing became extrasomatic. After that point knowledge did not have to be stored in the genes, or in the memory traces of the brain; it could be passed on from one person to another through words, or it could be written down and stored on a permanent substance like stone, paper, or silicon chips-in any case, outside the fragile and impermanent nervous system. (Csikszentmihalyi, 1993, pp. 56-57)

    Historical dictionary of quotations in cognitive science > Knowledge

  • 20 modular data center

    1. модульный центр обработки данных (ЦОД)

     


    EN-RU parallel texts

    [ http://loosebolts.wordpress.com/2008/12/02/our-vision-for-generation-4-modular-data-centers-one-way-of-getting-it-just-right/]

    [ http://dcnt.ru/?p=9299#more-9299]

    Data Centers are a hot topic these days. No matter where you look, this once obscure aspect of infrastructure is getting a lot of attention. For years, there have been cost pressures on IT operations and this, when the need for modern capacity is greater than ever, has thrust data centers into the spotlight. Server and rack density continues to rise, placing DC professionals and businesses in tighter and tougher situations while they struggle to manage their IT environments. And now hyper-scale cloud infrastructure is taking traditional technologies to limits never explored before and focusing the imagination of the IT industry on new possibilities.

    В настоящее время центры обработки данных являются широко обсуждаемой темой. Куда ни посмотришь, этот некогда малоизвестный аспект инфраструктуры привлекает все больше внимания. Годами ИТ-отделы испытывали нехватку средств, и это выдвинуло ЦОДы в центр внимания именно тогда, когда потребность в современных мощностях стала как никогда высокой. Плотность серверов и стоек продолжает расти, все больше усложняя ситуацию для специалистов по ЦОД и организаций в их попытках управлять своими ИТ-средами. И теперь гипермасштабируемая облачная инфраструктура подвергает традиционные технологии невиданным ранее нагрузкам и заставляет ИТ-индустрию искать новые возможности.

    At Microsoft, we have focused a lot of thought and research around how to best operate and maintain our global infrastructure and we want to share those learnings. While obviously there are some aspects that we keep to ourselves, we have shared how we operate facilities daily, our technologies and methodologies, and, most importantly, how we monitor and manage our facilities. Whether it’s speaking at industry events, inviting customers to our “Microsoft data center conferences” held in our data centers, or through other media like blogging and white papers, we believe sharing best practices is paramount and will drive the industry forward. So in that vein, we have some interesting news to share.

    В компании Microsoft уделяют большое внимание изучению наилучших методов эксплуатации и технического обслуживания своей глобальной инфраструктуры и делятся результатами своих исследований. И хотя мы, конечно, не раскрываем некоторые аспекты своих исследований, мы делимся повседневным опытом эксплуатации дата-центров, своими технологиями и методологиями и, что важнее всего, методами контроля и управления своими объектами. Будь то доклады на отраслевых мероприятиях, приглашение клиентов на наши конференции, которые посвящены центрам обработки данных Microsoft и проводятся в этих самых дата-центрах, или использование других средств, например блогов и технических документов (white papers), мы уверены, что обмен передовым опытом имеет первостепенное значение и будет продвигать отрасль вперед.

    Today we are sharing our Generation 4 Modular Data Center plan. This is our vision and will be the foundation of our cloud data center infrastructure in the next five years. We believe it is one of the most revolutionary changes to happen to data centers in the last 30 years. Joining me, in writing this blog are Daniel Costello, my director of Data Center Research and Engineering and Christian Belady, principal power and cooling architect. I feel their voices will add significant value to driving understanding around the many benefits included in this new design paradigm.

    Сейчас мы хотим поделиться своим планом модульного дата-центра четвертого поколения. Это наше видение, и оно будет основанием для инфраструктуры наших облачных дата-центров в ближайшие пять лет. Мы считаем, что это одно из самых революционных изменений в дата-центрах за последние 30 лет. Вместе со мной в написании этого блога участвовали Дэниел Костелло, директор по исследованиям и инжинирингу дата-центров, и Кристиан Белади, главный архитектор систем энергоснабжения и охлаждения. Мне кажется, что их мнения существенно помогут понять многочисленные преимущества этой новой парадигмы проектирования.

    Our “Gen 4” modular data centers will take the flexibility of containerized servers—like those in our Chicago data center—and apply it across the entire facility. So what do we mean by modular? Think of it like “building blocks”, where the data center will be composed of modular units of prefabricated mechanical, electrical, security components, etc., in addition to containerized servers.

    Was there a key driver for the Generation 4 Data Center?

    Наши модульные дата-центры Gen 4 возьмут гибкость контейнерных серверов, таких как в нашем чикагском дата-центре, и распространят ее на весь объект. Итак, что мы подразумеваем под модульностью? Представьте себе «строительные блоки»: дата-центр будет состоять из модульных блоков, изготовленных в заводских условиях механических и электрических систем, систем безопасности и т.п., в дополнение к контейнерным серверам.
    Был ли ключевой стимул для разработки дата-центра четвертого поколения?


    If we were to summarize the promise of our Gen 4 design into a single sentence it would be something like this: “A highly modular, scalable, efficient, just-in-time data center capacity program that can be delivered anywhere in the world very quickly and cheaply, while allowing for continued growth as required.” Sounds too good to be true, doesn’t it? Well, keep in mind that these concepts have been in initial development and prototyping for over a year and are based on cumulative knowledge of previous facility generations and the advances we have made since we began our investments in earnest on this new design.

    Если бы нам нужно было обобщить достоинства нашего проекта Gen 4 в одном предложении, это выглядело бы следующим образом: “Центр обработки данных с высоким уровнем модульности, расширяемости, и энергетической эффективности, а также возможностью постоянного расширения, в случае необходимости, который можно очень быстро и дешево развертывать в любом месте мира”. Звучит слишком хорошо для того чтобы быть правдой, не так ли? Ну, не забывайте, что эти концепции находились в процессе начальной разработки и создания опытного образца в течение более одного года и основываются на опыте, накопленном в ходе развития предыдущих поколений ЦОД, а также успехах, сделанных нами со времени, когда мы начали вкладывать серьезные средства в этот новый проект.

    One of the biggest challenges we’ve had at Microsoft is something Mike likes to call the ‘Goldilocks Problem’. In a nutshell, the problem can be stated as:

    The worst thing we can do in delivering facilities for the business is not have enough capacity online, thus limiting the growth of our products and services.

    Одну из самых больших проблем, с которыми приходилось сталкиваться Майкрософт, Майк любит называть «Проблемой Златовласки». Вкратце, эту проблему можно выразить следующим образом:

    Самое худшее, что может быть при строительстве ЦОД для бизнеса, это не располагать достаточными производственными мощностями, и тем самым ограничивать рост наших продуктов и сервисов.

    The second worst thing we can do in delivering facilities for the business is to have too much capacity online.

    А второй самой большой ошибкой при строительстве ЦОД для бизнеса может быть избыток введенных в строй производственных мощностей.

    This has led to a focus on smart, intelligent growth for the business — refining our overall demand picture. It can’t be too hot. It can’t be too cold. It has to be ‘Just Right!’ The capital dollars of investment are too large to make without long term planning. As we struggled to master these interesting challenges, we had to ensure that our technological plan also included solutions for the business and operational challenges we faced as well.
    So let’s take a high level look at our Generation 4 design

    Это заставило нас сосредоточиться на разумном, интеллектуальном росте бизнеса, уточняя общую картину нашего спроса. Не должно быть слишком горячо. Не должно быть слишком холодно. Должно быть «как раз так, как надо»! Капиталовложения слишком велики, чтобы делать их без долгосрочного планирования. Решая эти интересные задачи, мы должны были гарантировать, что наш технологический план включает также решения коммерческих и эксплуатационных проблем, с которыми нам приходилось сталкиваться.
    Давайте рассмотрим наш проект дата-центра четвертого поколения

    Are you ready for some great visuals? Check out this video at Soapbox. Click here for the Microsoft 4th Gen Video.

    It’s a concept video that came out of my Data Center Research and Engineering team, under Daniel Costello, that will give you a view into what we think is the future.

    From a configuration, construct-ability and time to market perspective, our primary goals and objectives are to modularize the whole data center. Not just the server side (like the Chicago facility), but the mechanical and electrical space as well. This means using the same kind of parts in pre-manufactured modules, the ability to use containers, skids, or rack-based deployments and the ability to tailor the Redundancy and Reliability requirements to the application at a very specific level.


    Готовы к отличным визуальным материалам? Посмотрите это видео на Soapbox. Перейдите по ссылке для просмотра видео о Microsoft 4th Gen:

    Это концептуальное видео, созданное командой отдела Data Center Research and Engineering, возглавляемого Дэниелом Костелло, которое даст вам наше представление о будущем.

    С точки зрения конфигурации, строительной технологичности и времени вывода на рынок наша главная цель и задача состоит в модуляризации всего дата-центра. Не только серверной части, как в дата-центре в Чикаго, но также механических (охлаждение) и электрических систем. Это означает применение деталей одного типа в сборных модулях, возможность использования контейнеров, салазок или стоечных систем, а также возможность подстраивать требования к избыточности и надежности под конкретное приложение на очень детальном уровне.

    Our goals from a cost perspective were simple in concept but tough to deliver. First and foremost, we had to reduce the capital cost per critical Mega Watt by the class of use. Some applications can run with N-level redundancy in the infrastructure, others require a little more infrastructure for support. These different classes of infrastructure requirements meant that optimizing for all cost classes was paramount. At Microsoft, we are not a one trick pony and have many Online products and services (240+) that require different levels of operational support. We understand that and ensured that we addressed it in our design which will allow us to reduce capital costs by 20%-40% or greater depending upon class.


    Наши цели в области затрат были концептуально простыми, но трудно реализуемыми. В первую очередь мы должны были снизить капитальные затраты в пересчете на один критический мегаватт в зависимости от класса использования. Некоторые приложения могут вполне работать на базе инфраструктуры с резервированием на уровне N, то есть без резервирования, а для работы других приложений требуется больше инфраструктуры. Эти разные классы требований к инфраструктуре означали, что оптимизация по всем классам затрат имеет первостепенное значение. В Майкрософт мы не ограничиваемся одним решением и располагаем большим количеством интерактивных продуктов и сервисов (более 240), которым требуются разные уровни эксплуатационной поддержки. Мы понимаем это и учитываем в своем проекте, который позволит нам сокращать капитальные затраты на 20%-40% или более в зависимости от класса.
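    The 20%-40% saving claimed above is simple arithmetic on the cost per critical megawatt. A minimal sketch of that calculation; note the baseline figure below is an invented placeholder, since the text quotes only the percentage range, not absolute costs:

    ```python
    # Illustrative only: the 20-40 % range comes from the text above;
    # the baseline cost per critical megawatt is a hypothetical placeholder.
    BASELINE_COST_PER_CRITICAL_MW = 10_000_000  # hypothetical dollars per critical MW

    def reduced_cost(baseline: float, saving: float) -> float:
        """Capital cost per critical MW after a fractional saving."""
        return baseline * (1.0 - saving)

    for saving in (0.20, 0.40):
        cost = reduced_cost(BASELINE_COST_PER_CRITICAL_MW, saving)
        print(f"{saving:.0%} saving -> ${cost:,.0f} per critical MW")
    ```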

    For example, non-critical or geo redundant applications have low hardware reliability requirements on a location basis. As a result, Gen 4 can be configured to provide stripped down, low-cost infrastructure with little or no redundancy and/or temperature control. Let’s say an Online service team decides that due to the dramatically lower cost, they will simply use uncontrolled outside air with temperatures ranging 10-35 C and 20-80% RH. The reality is we are already spec-ing this for all of our servers today and working with server vendors to broaden that range even further as Gen 4 becomes a reality. For this class of infrastructure, we eliminate generators, chillers, UPSs, and possibly lower costs relative to traditional infrastructure.

    Например, некритичные или географически избыточные системы предъявляют низкие требования к аппаратной надежности в пределах одной площадки. В результате Gen 4 можно конфигурировать как упрощенную, недорогую инфраструктуру с низким уровнем резервирования (или вообще без него) и/или без температурного контроля. Скажем, команда интерактивного сервиса решает, что из-за значительно меньших затрат она будет просто использовать некондиционированный наружный воздух с температурой 10-35°C и относительной влажностью 20-80%. В реальности мы уже сегодня предъявляем эти требования к своим серверам и работаем с поставщиками серверов над еще большим расширением этого диапазона по мере того, как Gen 4 становится реальностью. Для этого класса инфраструктуры мы отказываемся от генераторов, чиллеров и ИБП и, возможно, снижаем затраты по сравнению с традиционной инфраструктурой.
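    The uncontrolled outside-air idea above reduces to a simple range check. A minimal sketch, assuming only the envelope quoted in the text (10-35 °C, 20-80% relative humidity); the function name and defaults are illustrative, not Microsoft's actual control logic:

    ```python
    # Sketch of a free-cooling decision using the envelope quoted in the text.
    # The function name and default bounds are illustrative assumptions.
    def within_free_cooling_envelope(temp_c: float, rh_percent: float,
                                     t_min: float = 10.0, t_max: float = 35.0,
                                     rh_min: float = 20.0, rh_max: float = 80.0) -> bool:
        """True if outside air can be used directly under the quoted envelope."""
        return t_min <= temp_c <= t_max and rh_min <= rh_percent <= rh_max

    print(within_free_cooling_envelope(25.0, 50.0))  # mild day: inside the envelope
    print(within_free_cooling_envelope(38.0, 50.0))  # too hot for the quoted range
    ```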

    Applications that demand higher level of redundancy or temperature control will use configurations of Gen 4 to meet those needs, however, they will also cost more (but still less than traditional data centers). We see this cost difference driving engineering behavioral change in that we predict more applications will drive towards Geo redundancy to lower costs.

    Системы, которым требуется более высокий уровень резервирования или температурного контроля, будут использовать конфигурации Gen 4, отвечающие этим требованиям, однако, они будут также стоить больше. Но все равно они будут стоить меньше, чем традиционные дата-центры. Мы предвидим, что эти различия в затратах будут вызывать изменения в методах инжиниринга, и по нашим прогнозам, это будет выражаться в переходе все большего числа систем на гео-избыточность и меньшие затраты.

    Another cool thing about Gen 4 is that it allows us to deploy capacity when our demand dictates it. Once finalized, we will no longer need to make large upfront investments. Imagine driving capital costs more closely in-line with actual demand, thus greatly reducing time-to-market and adding the capacity Online inherent in the design. Also reduced is the amount of construction labor required to put these “building blocks” together. Since the entire platform requires pre-manufacture of its core components, on-site construction costs are lowered. This allows us to maximize our return on invested capital.

    Еще одно достоинство Gen 4 состоит в том, что он позволяет нам разворачивать дополнительные мощности тогда, когда этого требует спрос. Когда проект будет завершен, нам больше не нужно будет делать большие начальные капиталовложения. Представьте себе возможность более точного согласования капитальных затрат с реальным спросом, что значительно сокращает время вывода на рынок, при этом возможность оперативного добавления мощностей заложена в самом проекте. Также снижается объем строительных работ, требуемых для сборки этих «строительных блоков». Поскольку вся платформа предполагает заводское изготовление базовых компонентов, затраты на строительство на площадке снижаются. Это позволит нам максимально увеличить окупаемость вложенного капитала.
    Мы все подвергаем сомнению

    In our design process, we questioned everything. You may notice there is no roof and some might be uncomfortable with this. We explored the need of one and throughout our research we got some surprising (positive) results that showed one wasn’t needed.

    В процессе проектирования мы все подвергали сомнению. Вы, наверное, обратили внимание на отсутствие крыши, и некоторым это могло не понравиться. Мы изучили необходимость крыши и в ходе исследований получили неожиданные (положительные) результаты, которые показали, что крыша не нужна.
    Серийное производство дата-центров


    In short, we are striving to bring Henry Ford’s Model T factory to the data center. http://en.wikipedia.org/wiki/Henry_Ford#Model_T. Gen 4 will move data centers from a custom design and build model to a commoditized manufacturing approach. We intend to have our components built in factories and then assemble them in one location (the data center site) very quickly. Think about how a computer, car or plane is built today. Components are manufactured by different companies all over the world to a predefined spec and then integrated in one location based on demands and feature requirements. And just like Henry Ford’s assembly line drove the cost of building and the time-to-market down dramatically for the automobile industry, we expect Gen 4 to do the same for data centers. Everything will be pre-manufactured and assembled on the pad.

    Короче говоря, мы стремимся перенести в дата-центры модель завода Model T Генри Форда. Проект Gen 4 будет способствовать переходу от модели специализированного проектирования и строительства к серийному производственному подходу. Мы намерены изготавливать свои компоненты на заводах, а затем очень быстро собирать их в одном месте, на площадке строительства дата-центра. Подумайте о том, как сегодня изготавливается компьютер, автомобиль или самолет. Компоненты изготавливаются по заранее определенным спецификациям разными компаниями во всем мире, а затем собираются в одном месте на основе спроса и требуемых характеристик. И точно так же, как сборочный конвейер Генри Форда привел к значительному уменьшению затрат на производство и времени вывода на рынок в автомобильной промышленности, мы надеемся, что Gen 4 сделает то же самое для дата-центров. Все будет предварительно изготавливаться на заводах и собираться на площадке.
    Невероятно энергоэффективный ЦОД


    And did we mention that this platform will be, overall, incredibly energy efficient? From a total energy perspective not only will we have remarkable PUE values, but the total cost of energy going into the facility will be greatly reduced as well. How much energy goes into making concrete? Will we need as much of it? How much energy goes into the fuel of the construction vehicles? This will also be greatly reduced! A key driver is our goal to achieve an average PUE at or below 1.125 by 2012 across our data centers. More than that, we are on a mission to reduce the overall amount of copper and water used in these facilities. We believe these will be the next areas of industry attention when and if the energy problem is solved. So we are asking today…“how can we build a data center with less building”?

    А мы упоминали, что эта платформа будет в целом невероятно энергоэффективной? С точки зрения общего энергопотребления мы получим не только поразительные значения PUE, но и значительно снизим общую стоимость энергии, затрачиваемой на объект. Сколько энергии идет на производство бетона? Будет ли нам нужно столько бетона? Сколько энергии уходит на топливо строительной техники? Эти затраты тоже будут значительно снижены! Главным стимулом является достижение к 2012 году среднего PUE не выше 1.125 для всех наших дата-центров. Более того, у нас есть задача сокращения общего количества меди и воды, используемых в этих объектах. Мы думаем, что эти задачи станут следующей заботой отрасли, когда и если будет решена энергетическая проблема. Итак, сегодня мы спрашиваем себя: «как можно построить дата-центр с меньшим объемом строительных работ?»
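    PUE (Power Usage Effectiveness) is the ratio of total facility power to the power delivered to IT equipment, so the 1.125 target above means only 12.5% overhead for cooling, power distribution, and everything else. A minimal sketch of the arithmetic; the kilowatt figures are invented examples, only the 1.125 target comes from the text:

    ```python
    def pue(total_facility_kw: float, it_load_kw: float) -> float:
        """Power Usage Effectiveness: total facility power / IT equipment power."""
        if it_load_kw <= 0:
            raise ValueError("IT load must be positive")
        return total_facility_kw / it_load_kw

    # Hypothetical facility: 1,000 kW of IT load plus 125 kW of overhead
    # hits exactly the average target mentioned above.
    print(pue(1125.0, 1000.0))  # 1.125
    ```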
    Строительство дата-центров без чиллеров

    We have talked openly and publicly about building chiller-less data centers and running our facilities using aggressive outside economization. Our sincerest hope is that Gen 4 will completely eliminate the use of water. Today’s data centers use massive amounts of water and we see water as the next scarce resource and have decided to take a proactive stance on making water conservation part of our plan.

    Мы открыто и публично говорили о строительстве дата-центров без чиллеров и активном использовании в наших центрах обработки данных технологий свободного охлаждения (фрикулинга). Мы искренне надеемся, что Gen 4 позволит полностью отказаться от использования воды. Современные дата-центры расходуют большие объемы воды, и, так как мы считаем воду следующим дефицитным ресурсом, мы решили принять упреждающие меры и включить экономию воды в свой план.

    By sharing this with the industry, we believe everyone can benefit from our methodology. While this concept and approach may be intimidating (or downright frightening) to some in the industry, disclosure ultimately is better for all of us.

    Делясь этим опытом с отраслью, мы считаем, что каждый сможет извлечь выгоду из нашей методологии. Хотя эта концепция и подход могут показаться пугающими (или откровенно страшными) некоторым отраслевым специалистам, раскрывая свои планы, мы, в конечном счете, делаем лучше для всех нас.

    Gen 4 design (even more than just containers), could reduce the ‘religious’ debates in our industry. With the central spine infrastructure in place, containers or pre-manufactured server halls can be either AC or DC, air-side economized or water-side economized, or not economized at all (though the sanity of that might be questioned). Gen 4 will allow us to decommission, repair and upgrade quickly because everything is modular. No longer will we be governed by the initial decisions made when constructing the facility. We will have almost unlimited use and re-use of the facility and site. We will also be able to use power in an ultra-fluid fashion moving load from critical to non-critical as use and capacity requirements dictate.

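The “ultra-fluid” power idea above can be sketched as a fixed facility budget reallocated between critical and non-critical pools as demand dictates. The class and numbers below are hypothetical; the article does not describe any software interface for this.

```python
class PowerBudget:
    """Toy model of one facility power budget split across two pools."""

    def __init__(self, total_kw):
        self.total_kw = total_kw
        self.critical_kw = 0.0

    def allocate_critical(self, kw):
        # Growing the critical pool shrinks the non-critical one, and
        # the facility budget is a hard cap.
        if kw > self.total_kw:
            raise ValueError("allocation exceeds facility budget")
        self.critical_kw = kw

    @property
    def non_critical_kw(self):
        # Whatever the critical pool does not claim is available for
        # non-critical (e.g. batch or best-effort) load.
        return self.total_kw - self.critical_kw

budget = PowerBudget(total_kw=10_000)
budget.allocate_critical(6_500)
print(budget.non_critical_kw)   # → 3500.0
```

The point of the modular design is that this reallocation happens without being constrained by decisions baked in when the building was poured.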
    Gen 4 is a standard platform

    Finally, we believe this is a big game changer. Gen 4 will provide a standard platform that our industry can innovate around. For example, all modules in our Gen 4 will have common interfaces clearly defined by our specs and any vendor that meets these specifications will be able to plug into our infrastructure. Whether you are a computer vendor, UPS vendor, generator vendor, etc., you will be able to plug and play into our infrastructure. This means we can also source anyone, anywhere on the globe to minimize costs and maximize performance. We want to help motivate the industry to further innovate—with innovations from which everyone can reap the benefits.

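The plug-and-play idea can be illustrated with structural typing: any vendor module that satisfies a common interface can be attached to the spine, with no vendor-specific wiring. The `Protocol` below is purely illustrative — the actual Gen 4 interface specs are mechanical and electrical, not software — and all names are invented.

```python
from typing import Protocol

class SpineModule(Protocol):
    """Hypothetical common interface every pluggable module must meet."""
    vendor: str
    def rated_load_kw(self) -> float: ...

class VendorUPS:
    vendor = "AnyUPSCo"          # invented vendor name
    def rated_load_kw(self) -> float:
        return 500.0

class VendorGenerator:
    vendor = "AnyGenCo"          # invented vendor name
    def rated_load_kw(self) -> float:
        return 2000.0

def attach(modules: list[SpineModule]) -> float:
    """Total capacity plugged into the spine, vendor-agnostic."""
    return sum(m.rated_load_kw() for m in modules)

print(attach([VendorUPS(), VendorGenerator()]))   # → 2500.0
```

Because `attach` only depends on the interface, a new supplier anywhere in the world can be sourced simply by meeting the spec — which is exactly the cost and performance argument the paragraph above makes.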
    Key characteristics of fourth-generation (Gen 4) data centers

    To summarize, the key characteristics of our Generation 4 data centers are:

    Scalable
    Plug-and-play spine infrastructure
    Factory pre-assembled: Pre-Assembled Containers (PACs) & Pre-Manufactured Buildings (PMBs)
    Rapid deployment
    De-mountable
    Reduced time-to-market (TTM)
    Reduced construction time
    Sustainable measures

    Map applications to DC Class

    We hope you join us on this incredible journey of change and innovation!

    Long hours of research and engineering time have been invested in this process. There are still some long days and nights ahead, but the vision is clear. Rest assured, however, that as we refine Generation 4, the team will soon be looking to Generation 5 (even if it is a bit farther out). There is always room to get better.


    So if you happen to come across Goldilocks in the forest and you are curious as to why she is smiling, you will know that she feels very good about getting very close to ‘JUST RIGHT’.

    Generations of Evolution – some background on our data center designs

    We thought you might be interested in understanding what happened in the first three generations of our data center designs. When Ray Ozzie wrote his Software plus Services memo, it posed a very interesting challenge to us. The winds of change were at ‘tornado’ proportions. That “plus Services” tag had some significant (and unstated) challenges inherent to it. The first was that Microsoft was going to evolve even further into an operations company. While we had been running large-scale Internet services since 1995, this development led us to an entirely new level. Additionally, these “services” would span both Internet and Enterprise businesses. Those of you who have to operate “stuff” know that these are two very different worlds in operational models and challenges. It also meant that achieving the same level of reliability and performance required our infrastructure to scale globally, and in a significant way.

    It was in that intense atmosphere of change that we first started re-evaluating data center technology and processes in general, and our ideas began to reach farther than what was accepted by the industry at large. This was the era of Generation 1. Where most of the world’s data centers are today (and where our facilities were) represents all the known learning and design requirements that had been in place since IBM built the first purpose-built computer room. These facilities focused primarily on uptime, reliability and redundancy, and big infrastructure was held accountable for solving all potential environmental shortfalls. This is where the majority of infrastructure in the industry still is today.

    We soon realized that traditional data centers were quickly becoming outdated. They were not keeping up with the demands of what was happening technologically and environmentally. That’s when we kicked off our Generation 2 design. Gen 2 facilities started taking into account sustainability, energy efficiency, and really looking at the total cost of energy and operations.

    No longer did we view data centers just for the upfront capital costs, but we took a hard look at the facility over the course of its life. Our Quincy, Washington and San Antonio, Texas facilities are examples of our Gen 2 data centers where we explored and implemented new ways to lessen the impact on the environment. These facilities are considered two leading industry examples, based on their energy efficiency and ability to run and operate at new levels of scale and performance by leveraging clean hydro power (Quincy) and recycled waste water (San Antonio) to cool the facility during peak cooling months.

    As we were delivering our Gen 2 facilities into steel and concrete, our Generation 3 facilities were rapidly driving the evolution of the program. The key concepts for our Gen 3 design are increased modularity and greater concentration around energy efficiency and scale. The Gen 3 facility will be best represented by the Chicago, Illinois facility currently under construction. This facility will seem very foreign compared to the traditional data center concepts most of the industry is comfortable with. In fact, if you ever sit around in our container hangar in Chicago, it will look incredibly different from a traditional raised-floor data center. We anticipate this modularization will drive huge efficiencies in terms of cost and operations for our business. We will also introduce significant changes in the environmental systems used to run our facilities. These concepts and processes (where applicable) will help us gain even greater efficiencies in our existing footprint, allowing us to further maximize infrastructure investments.

    This is definitely a ‘journey, not a destination’ industry. In fact, our Generation 4 design has been under heavy engineering for viability and cost for over a year. While the demands of our commercial growth required us to make investments as we grew, we treated each step in the learning as a process for further innovation in data centers. The design for our future Gen 4 facilities enabled us to make visionary advances that addressed the challenges of building, running, and operating facilities all in one concerted effort.

    English-Russian dictionary of normative technical terminology > modular data center

See also in other dictionaries:

  • evolution — evolutional, adj. evolutionally, adv. /ev euh looh sheuhn/ or, esp. Brit., /ee veuh /, n. 1. any process of formation or growth; development: the evolution of a language; the evolution of the airplane. 2. a product of such development; something… …   Universalium

  • Evolution — This article is about evolution in biology. For other uses, see Evolution (disambiguation). For a generally accessible and less technical introduction to the topic, see Introduction to evolution. Part of a series on …   Wikipedia

  • Evolution of ageing — Enquiry into the evolution of ageing aims to explain why almost all living things weaken and die with age. There is not yet agreement in the scientific community on a single answer. The evolutionary origin of senescence remains a fundamental… …   Wikipedia

  • Evolution of complexity — The evolution of complexity is an important outcome of the process of evolution. Evolution has produced some remarkably complex organisms although this feature is hard to measure accurately in biology, with properties such as gene content, the… …   Wikipedia

  • process — pro|cess1 W1S2 [ˈprəuses US ˈpra: ] n [Date: 1300 1400; : Old French; Origin: proces, from Latin processus, from procedere; PROCEED] 1.) a series of actions that are done in order to achieve a particular result ▪ the Israeli Egyptian peace… …   Dictionary of contemporary English

  • Evolution Day — HolidayInfo caption=The title page of the 1859 edition of On the Origin of Species holiday name=Evolution Day significance=First publication of The Origin of Species observedby=Various groups and individuals date=November 24 celebrations=Various… …   Wikipedia

  • Evolution Theory — Infobox Album | Name = Evolution Theory (天演論) Type = Album Artist = Candy Lo Released = June 14 2005 Recorded = Hong Kong Genre = C rock Pop rock Alternative rock Length = 44:51 Label = Sony BMG Music Entertainment (Hong Kong) Producer = Kubert… …   Wikipedia

  • Evolution —     Evolution (History and Scientific Foundation)     † Catholic Encyclopedia ► Evolution (History and Scientific Foundation)     The world of organisms comprises a great system of individual forms generally classified according to structural… …   Catholic encyclopedia

  • Evolution (term) — Evolution is a term with many meanings. For instance, Merriam Webster lists biological evolution as one meaning out of a total of six.Evolution is not exclusively a term of biology. There are also evolutionary economics, evolution of languages,… …   Wikipedia

  • Process theory — is a commonly used form of scientific research study in which events or occurrences are said to be the result of certain input states leading to a certain outcome (output) state, following a set process.Process theory holds that if an outcome is… …   Wikipedia

  • Evolution — Ev·o·lu·tion (ĕv·ō·lū′shŭn), n. [L. evolutio an unrolling: cf. F. évolution. See {Evolve}.] 1. The act of unfolding or unrolling; hence, any process of growth or development; as, the evolution of a flower from a bud, or… …   The Collaborative International Dictionary of English
